Test Report: Docker_Linux_crio 21830

3aa0d58a4eff13dd9d5f058e659508fb4ffd2206:2025-11-01:42156

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 14.78
36 TestAddons/parallel/RegistryCreds 0.41
37 TestAddons/parallel/Ingress 150.02
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.37
41 TestAddons/parallel/CSI 43.4
42 TestAddons/parallel/Headlamp 2.65
43 TestAddons/parallel/CloudSpanner 5.25
44 TestAddons/parallel/LocalPath 10.11
45 TestAddons/parallel/NvidiaDevicePlugin 5.25
46 TestAddons/parallel/Yakd 5.25
47 TestAddons/parallel/AmdGpuDevicePlugin 5.25
97 TestFunctional/parallel/ServiceCmdConnect 602.88
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.94
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.22
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.01
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.29
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
143 TestFunctional/parallel/ServiceCmd/DeployApp 600.6
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
153 TestFunctional/parallel/ServiceCmd/Format 0.53
154 TestFunctional/parallel/ServiceCmd/URL 0.53
191 TestJSONOutput/pause/Command 2.18
197 TestJSONOutput/unpause/Command 1.68
286 TestPause/serial/Pause 7.03
345 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.36
352 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.21
355 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.11
366 TestStartStop/group/no-preload/serial/Pause 5.69
368 TestStartStop/group/old-k8s-version/serial/Pause 6.35
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.12
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.15
381 TestStartStop/group/embed-certs/serial/Pause 5.95
388 TestStartStop/group/newest-cni/serial/Pause 5.53
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 6
TestAddons/serial/Volcano (0.25s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable volcano --alsologtostderr -v=1: exit status 11 (252.515625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:57:17.952709   71200 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:17.953002   71200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:17.953013   71200 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:17.953017   71200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:17.953210   71200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:57:17.953490   71200 mustload.go:66] Loading cluster: addons-407417
	I1101 09:57:17.953856   71200 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:17.953872   71200 addons.go:607] checking whether the cluster is paused
	I1101 09:57:17.953950   71200 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:17.953972   71200 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:57:17.954329   71200 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:57:17.972325   71200 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:17.972380   71200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:57:17.989599   71200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:57:18.088392   71200 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:57:18.088490   71200 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:57:18.117883   71200 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:57:18.117916   71200 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:57:18.117929   71200 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:57:18.117933   71200 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:57:18.117936   71200 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:57:18.117940   71200 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:57:18.117943   71200 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:57:18.117945   71200 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:57:18.117948   71200 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:57:18.117958   71200 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:57:18.117969   71200 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:57:18.117972   71200 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:57:18.117975   71200 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:57:18.117977   71200 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:57:18.117980   71200 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:57:18.117986   71200 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:57:18.117991   71200 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:57:18.117995   71200 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:57:18.117997   71200 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:57:18.117999   71200 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:57:18.118002   71200 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:57:18.118004   71200 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:57:18.118007   71200 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:57:18.118009   71200 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:57:18.118011   71200 cri.go:89] found id: ""
	I1101 09:57:18.118056   71200 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:57:18.132152   71200 out.go:203] 
	W1101 09:57:18.133216   71200 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:57:18.133233   71200 out.go:285] * 
	* 
	W1101 09:57:18.137389   71200 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:57:18.138589   71200 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
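The addon-disable failures shown in this report all trip the same paused-state check: minikube runs "sudo runc list -f json" inside the node and exits because /run/runc cannot be opened. As a rough manual reproduction, assuming the addons-407417 node container is still running and substituting docker exec for minikube's internal SSH runner (the commands themselves are copied from the log above):

	# list kube-system containers the same way minikube's pause check does
	docker exec addons-407417 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the failing step: runc's state directory /run/runc is absent on this crio node
	docker exec addons-407417 sudo runc list -f json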

                                                
                                    
TestAddons/parallel/Registry (14.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.295802ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-httq4" [4d19dfc4-d429-42bc-af5d-d46aeca3a22c] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00299241s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-cz772" [84b0726b-ceae-40c2-821a-3ac4237df885] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00337902s
addons_test.go:392: (dbg) Run:  kubectl --context addons-407417 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-407417 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-407417 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.29374419s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 ip
2025/11/01 09:57:43 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable registry --alsologtostderr -v=1: exit status 11 (270.55719ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:57:43.492964   73935 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:43.493299   73935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:43.493311   73935 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:43.493318   73935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:43.493626   73935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:57:43.494012   73935 mustload.go:66] Loading cluster: addons-407417
	I1101 09:57:43.494464   73935 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:43.494482   73935 addons.go:607] checking whether the cluster is paused
	I1101 09:57:43.494722   73935 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:43.494764   73935 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:57:43.495254   73935 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:57:43.512656   73935 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:43.512701   73935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:57:43.532181   73935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:57:43.638115   73935 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:57:43.638196   73935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:57:43.673292   73935 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:57:43.673311   73935 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:57:43.673314   73935 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:57:43.673318   73935 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:57:43.673320   73935 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:57:43.673323   73935 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:57:43.673325   73935 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:57:43.673328   73935 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:57:43.673330   73935 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:57:43.673335   73935 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:57:43.673337   73935 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:57:43.673340   73935 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:57:43.673343   73935 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:57:43.673345   73935 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:57:43.673348   73935 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:57:43.673354   73935 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:57:43.673360   73935 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:57:43.673364   73935 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:57:43.673366   73935 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:57:43.673368   73935 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:57:43.673371   73935 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:57:43.673373   73935 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:57:43.673375   73935 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:57:43.673378   73935 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:57:43.673380   73935 cri.go:89] found id: ""
	I1101 09:57:43.673415   73935 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:57:43.687276   73935 out.go:203] 
	W1101 09:57:43.688615   73935 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:57:43.688644   73935 out.go:285] * 
	* 
	W1101 09:57:43.693029   73935 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:57:43.695588   73935 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.78s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.41s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.014999ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-407417
addons_test.go:332: (dbg) Run:  kubectl --context addons-407417 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (247.296916ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:57:49.171539   74432 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:49.171838   74432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:49.171850   74432 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:49.171854   74432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:49.172080   74432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:57:49.172308   74432 mustload.go:66] Loading cluster: addons-407417
	I1101 09:57:49.172649   74432 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:49.172664   74432 addons.go:607] checking whether the cluster is paused
	I1101 09:57:49.172746   74432 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:49.172761   74432 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:57:49.173128   74432 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:57:49.189780   74432 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:49.189829   74432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:57:49.206211   74432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:57:49.304207   74432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:57:49.304303   74432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:57:49.333890   74432 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:57:49.333910   74432 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:57:49.333914   74432 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:57:49.333917   74432 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:57:49.333919   74432 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:57:49.333923   74432 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:57:49.333925   74432 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:57:49.333928   74432 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:57:49.333930   74432 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:57:49.333935   74432 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:57:49.333937   74432 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:57:49.333939   74432 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:57:49.333941   74432 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:57:49.333944   74432 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:57:49.333946   74432 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:57:49.333988   74432 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:57:49.333996   74432 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:57:49.334000   74432 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:57:49.334002   74432 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:57:49.334004   74432 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:57:49.334007   74432 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:57:49.334009   74432 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:57:49.334011   74432 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:57:49.334014   74432 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:57:49.334016   74432 cri.go:89] found id: ""
	I1101 09:57:49.334057   74432 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:57:49.347839   74432 out.go:203] 
	W1101 09:57:49.348972   74432 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:57:49.348998   74432 out.go:285] * 
	* 
	W1101 09:57:49.353071   74432 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:57:49.354378   74432 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.41s)

                                                
                                    
TestAddons/parallel/Ingress (150.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-407417 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-407417 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-407417 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [1d61b693-f1a8-4dee-a2be-13dfa6ad7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [1d61b693-f1a8-4dee-a2be-13dfa6ad7e0d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003472433s
I1101 09:57:51.496528   61522 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.35323446s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-407417 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-407417
helpers_test.go:243: (dbg) docker inspect addons-407417:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d",
	        "Created": "2025-11-01T09:55:07.745378868Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 63564,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:55:07.778921511Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d/hosts",
	        "LogPath": "/var/lib/docker/containers/ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d/ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d-json.log",
	        "Name": "/addons-407417",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-407417:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-407417",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d",
	                "LowerDir": "/var/lib/docker/overlay2/291d0f4817314d287a487f35ac3897afc0ecc7fe87b00b9144bd88abe3c60b06-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/291d0f4817314d287a487f35ac3897afc0ecc7fe87b00b9144bd88abe3c60b06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/291d0f4817314d287a487f35ac3897afc0ecc7fe87b00b9144bd88abe3c60b06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/291d0f4817314d287a487f35ac3897afc0ecc7fe87b00b9144bd88abe3c60b06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-407417",
	                "Source": "/var/lib/docker/volumes/addons-407417/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-407417",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-407417",
	                "name.minikube.sigs.k8s.io": "addons-407417",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "85c889e1490ed7288b2eebae0ef1c6b5e3585f156ba757532f98e9a94ab85cdb",
	            "SandboxKey": "/var/run/docker/netns/85c889e1490e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-407417": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:c6:88:98:aa:a1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1118a8e501685a515edaa4b953c5701ac59aa6d5e4c88c554f16e9c1e729e89a",
	                    "EndpointID": "3f13917744a420456b082e7060e56442c956eb32e5d032775bda172d5bb372e7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-407417",
	                        "ed2470031382"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-407417 -n addons-407417
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-407417 logs -n 25: (1.230726822s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-031328 --alsologtostderr --binary-mirror http://127.0.0.1:41345 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-031328 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │                     │
	│ delete  │ -p binary-mirror-031328                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-031328 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ addons  │ enable dashboard -p addons-407417                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │                     │
	│ addons  │ disable dashboard -p addons-407417                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │                     │
	│ start   │ -p addons-407417 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:57 UTC │
	│ addons  │ addons-407417 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ addons-407417 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ enable headlamp -p addons-407417 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ addons-407417 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ addons-407417 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ addons-407417 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ ssh     │ addons-407417 ssh cat /opt/local-path-provisioner/pvc-82607fe7-5a15-4749-9c83-e78b928a7386_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ addons  │ addons-407417 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ addons-407417 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ ip      │ addons-407417 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ addons  │ addons-407417 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ addons-407417 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ addons-407417 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-407417                                                                                                                                                                                                                                                                                                                                                                                           │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:57 UTC │
	│ addons  │ addons-407417 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ addons-407417 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ ssh     │ addons-407417 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ addons-407417 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:58 UTC │                     │
	│ addons  │ addons-407417 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 09:58 UTC │                     │
	│ ip      │ addons-407417 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-407417        │ jenkins │ v1.37.0 │ 01 Nov 25 10:00 UTC │ 01 Nov 25 10:00 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:54:47
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:54:47.664744   62924 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:54:47.664990   62924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:54:47.664999   62924 out.go:374] Setting ErrFile to fd 2...
	I1101 09:54:47.665003   62924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:54:47.665213   62924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:54:47.665733   62924 out.go:368] Setting JSON to false
	I1101 09:54:47.666517   62924 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5828,"bootTime":1761985060,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:54:47.666599   62924 start.go:143] virtualization: kvm guest
	I1101 09:54:47.668301   62924 out.go:179] * [addons-407417] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:54:47.669336   62924 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 09:54:47.669355   62924 notify.go:221] Checking for updates...
	I1101 09:54:47.671457   62924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:54:47.672525   62924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 09:54:47.673590   62924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 09:54:47.674609   62924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:54:47.675656   62924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:54:47.676874   62924 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:54:47.697989   62924 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:54:47.698135   62924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:54:47.754235   62924 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-01 09:54:47.745337495 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:54:47.754344   62924 docker.go:319] overlay module found
	I1101 09:54:47.755751   62924 out.go:179] * Using the docker driver based on user configuration
	I1101 09:54:47.756712   62924 start.go:309] selected driver: docker
	I1101 09:54:47.756728   62924 start.go:930] validating driver "docker" against <nil>
	I1101 09:54:47.756739   62924 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:54:47.757256   62924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:54:47.816269   62924 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-01 09:54:47.806123113 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:54:47.816410   62924 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:54:47.816639   62924 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:54:47.818154   62924 out.go:179] * Using Docker driver with root privileges
	I1101 09:54:47.819109   62924 cni.go:84] Creating CNI manager for ""
	I1101 09:54:47.819174   62924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:54:47.819186   62924 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:54:47.819247   62924 start.go:353] cluster config:
	{Name:addons-407417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-407417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 09:54:47.820367   62924 out.go:179] * Starting "addons-407417" primary control-plane node in "addons-407417" cluster
	I1101 09:54:47.821370   62924 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:54:47.822316   62924 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:54:47.823280   62924 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:54:47.823308   62924 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:54:47.823312   62924 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:54:47.823406   62924 cache.go:59] Caching tarball of preloaded images
	I1101 09:54:47.823517   62924 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:54:47.823530   62924 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:54:47.823847   62924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/config.json ...
	I1101 09:54:47.823869   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/config.json: {Name:mk11a6cb83771ab7cf7d8557dde1fee66bcc7743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:54:47.838821   62924 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:54:47.838922   62924 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:54:47.838938   62924 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 09:54:47.838942   62924 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 09:54:47.838952   62924 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 09:54:47.838959   62924 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 09:55:00.692543   62924 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 09:55:00.692595   62924 cache.go:233] Successfully downloaded all kic artifacts
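For reference, the pinned kic base image loaded above can be cross-checked against the local Docker daemon; a minimal sketch (illustrative, not part of the test run):

    # List local copies of the kic base image together with their digests;
    # the digest should match the sha256 pinned in the log above
    docker images --digests gcr.io/k8s-minikube/kicbase-builds
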
	I1101 09:55:00.692644   62924 start.go:360] acquireMachinesLock for addons-407417: {Name:mk47dbd797c97fe05e1b91d4d97e970ae666c44c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:55:00.692747   62924 start.go:364] duration metric: took 80.733µs to acquireMachinesLock for "addons-407417"
	I1101 09:55:00.692771   62924 start.go:93] Provisioning new machine with config: &{Name:addons-407417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-407417 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:55:00.692849   62924 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:55:00.694609   62924 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 09:55:00.694848   62924 start.go:159] libmachine.API.Create for "addons-407417" (driver="docker")
	I1101 09:55:00.694881   62924 client.go:173] LocalClient.Create starting
	I1101 09:55:00.695000   62924 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem
	I1101 09:55:00.782654   62924 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem
	I1101 09:55:00.873546   62924 cli_runner.go:164] Run: docker network inspect addons-407417 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:55:00.889840   62924 cli_runner.go:211] docker network inspect addons-407417 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:55:00.889921   62924 network_create.go:284] running [docker network inspect addons-407417] to gather additional debugging logs...
	I1101 09:55:00.889943   62924 cli_runner.go:164] Run: docker network inspect addons-407417
	W1101 09:55:00.906087   62924 cli_runner.go:211] docker network inspect addons-407417 returned with exit code 1
	I1101 09:55:00.906116   62924 network_create.go:287] error running [docker network inspect addons-407417]: docker network inspect addons-407417: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-407417 not found
	I1101 09:55:00.906142   62924 network_create.go:289] output of [docker network inspect addons-407417]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-407417 not found
	
	** /stderr **
	I1101 09:55:00.906266   62924 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:55:00.922920   62924 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86370}
	I1101 09:55:00.922960   62924 network_create.go:124] attempt to create docker network addons-407417 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 09:55:00.923006   62924 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-407417 addons-407417
	I1101 09:55:00.980988   62924 network_create.go:108] docker network addons-407417 192.168.49.0/24 created
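As an aside, the bridge network created by the command above can be verified from the Docker host; a small sketch using the same names as this run:

    # Show the subnet and gateway minikube selected (192.168.49.0/24 and 192.168.49.1 per the log)
    docker network inspect addons-407417 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # Show the minikube-owned labels attached with --label above
    docker network inspect addons-407417 --format '{{json .Labels}}'
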
	I1101 09:55:00.981022   62924 kic.go:121] calculated static IP "192.168.49.2" for the "addons-407417" container
	I1101 09:55:00.981078   62924 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:55:00.995618   62924 cli_runner.go:164] Run: docker volume create addons-407417 --label name.minikube.sigs.k8s.io=addons-407417 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:55:01.013315   62924 oci.go:103] Successfully created a docker volume addons-407417
	I1101 09:55:01.013403   62924 cli_runner.go:164] Run: docker run --rm --name addons-407417-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-407417 --entrypoint /usr/bin/test -v addons-407417:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:55:03.403008   62924 cli_runner.go:217] Completed: docker run --rm --name addons-407417-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-407417 --entrypoint /usr/bin/test -v addons-407417:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.38956461s)
	I1101 09:55:03.403038   62924 oci.go:107] Successfully prepared a docker volume addons-407417
	I1101 09:55:03.403078   62924 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:55:03.403103   62924 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:55:03.403162   62924 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-407417:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:55:07.671369   62924 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-407417:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.26814786s)
	I1101 09:55:07.671402   62924 kic.go:203] duration metric: took 4.268295299s to extract preloaded images to volume ...
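The extracted preload now lives on the named volume that is mounted at /var inside the node container; a quick illustrative check from the host:

    # Confirm the named volume exists and where the daemon stores it
    docker volume inspect addons-407417 --format '{{.Name}} -> {{.Mountpoint}}'
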
	W1101 09:55:07.671522   62924 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:55:07.671568   62924 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:55:07.671609   62924 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:55:07.729582   62924 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-407417 --name addons-407417 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-407417 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-407417 --network addons-407417 --ip 192.168.49.2 --volume addons-407417:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:55:08.014648   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Running}}
	I1101 09:55:08.032880   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:08.050839   62924 cli_runner.go:164] Run: docker exec addons-407417 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:55:08.099536   62924 oci.go:144] the created container "addons-407417" has a running status.
	I1101 09:55:08.099575   62924 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa...
	I1101 09:55:08.144791   62924 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:55:08.171924   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:08.189746   62924 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:55:08.189770   62924 kic_runner.go:114] Args: [docker exec --privileged addons-407417 chown docker:docker /home/docker/.ssh/authorized_keys]
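With the public key installed in /home/docker/.ssh/authorized_keys, the node container can also be reached by hand over the forwarded SSH port; a sketch using the host port Docker assigned to 22/tcp in this run (32768, per the log further down):

    # Log in to the node as the docker user with the generated key
    ssh -i /home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa \
        -p 32768 docker@127.0.0.1 hostname
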
	I1101 09:55:08.228169   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:08.249664   62924 machine.go:94] provisionDockerMachine start ...
	I1101 09:55:08.249798   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:08.271342   62924 main.go:143] libmachine: Using SSH client type: native
	I1101 09:55:08.271607   62924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 09:55:08.271622   62924 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:55:08.272258   62924 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46100->127.0.0.1:32768: read: connection reset by peer
	I1101 09:55:11.414783   62924 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-407417
	
	I1101 09:55:11.414820   62924 ubuntu.go:182] provisioning hostname "addons-407417"
	I1101 09:55:11.414915   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:11.432301   62924 main.go:143] libmachine: Using SSH client type: native
	I1101 09:55:11.432622   62924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 09:55:11.432642   62924 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-407417 && echo "addons-407417" | sudo tee /etc/hostname
	I1101 09:55:11.581758   62924 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-407417
	
	I1101 09:55:11.581836   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:11.600536   62924 main.go:143] libmachine: Using SSH client type: native
	I1101 09:55:11.600749   62924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 09:55:11.600765   62924 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-407417' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-407417/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-407417' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:55:11.742580   62924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:55:11.742614   62924 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 09:55:11.742666   62924 ubuntu.go:190] setting up certificates
	I1101 09:55:11.742683   62924 provision.go:84] configureAuth start
	I1101 09:55:11.742745   62924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-407417
	I1101 09:55:11.759741   62924 provision.go:143] copyHostCerts
	I1101 09:55:11.759820   62924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 09:55:11.759939   62924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 09:55:11.760060   62924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 09:55:11.760128   62924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.addons-407417 san=[127.0.0.1 192.168.49.2 addons-407417 localhost minikube]
	I1101 09:55:11.808391   62924 provision.go:177] copyRemoteCerts
	I1101 09:55:11.808457   62924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:55:11.808506   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:11.827287   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:11.928317   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:55:11.948338   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:55:11.966161   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:55:11.983903   62924 provision.go:87] duration metric: took 241.20484ms to configureAuth
	I1101 09:55:11.983936   62924 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:55:11.984121   62924 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:55:11.984224   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:12.000880   62924 main.go:143] libmachine: Using SSH client type: native
	I1101 09:55:12.001136   62924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 09:55:12.001160   62924 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:55:12.254702   62924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:55:12.254741   62924 machine.go:97] duration metric: took 4.005048324s to provisionDockerMachine
	I1101 09:55:12.254756   62924 client.go:176] duration metric: took 11.559866083s to LocalClient.Create
	I1101 09:55:12.254783   62924 start.go:167] duration metric: took 11.5599355s to libmachine.API.Create "addons-407417"
	I1101 09:55:12.254793   62924 start.go:293] postStartSetup for "addons-407417" (driver="docker")
	I1101 09:55:12.254806   62924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:55:12.254901   62924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:55:12.254957   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:12.271962   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:12.373654   62924 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:55:12.377056   62924 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:55:12.377085   62924 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:55:12.377099   62924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 09:55:12.377166   62924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 09:55:12.377200   62924 start.go:296] duration metric: took 122.400126ms for postStartSetup
	I1101 09:55:12.377535   62924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-407417
	I1101 09:55:12.393732   62924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/config.json ...
	I1101 09:55:12.394011   62924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:55:12.394068   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:12.410056   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:12.506765   62924 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:55:12.511141   62924 start.go:128] duration metric: took 11.81827633s to createHost
	I1101 09:55:12.511165   62924 start.go:83] releasing machines lock for "addons-407417", held for 11.818404827s
	I1101 09:55:12.511243   62924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-407417
	I1101 09:55:12.527840   62924 ssh_runner.go:195] Run: cat /version.json
	I1101 09:55:12.527875   62924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:55:12.527897   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:12.527958   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:12.545740   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:12.546238   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:12.696130   62924 ssh_runner.go:195] Run: systemctl --version
	I1101 09:55:12.702538   62924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:55:12.737439   62924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:55:12.742051   62924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:55:12.742123   62924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:55:12.766912   62924 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:55:12.766947   62924 start.go:496] detecting cgroup driver to use...
	I1101 09:55:12.766999   62924 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:55:12.767045   62924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:55:12.782903   62924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:55:12.795098   62924 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:55:12.795164   62924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:55:12.811455   62924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:55:12.829116   62924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:55:12.907553   62924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:55:12.991074   62924 docker.go:234] disabling docker service ...
	I1101 09:55:12.991134   62924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:55:13.009695   62924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:55:13.022320   62924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:55:13.106107   62924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:55:13.182043   62924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:55:13.194011   62924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:55:13.207392   62924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:55:13.207445   62924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.217335   62924 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:55:13.217401   62924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.225826   62924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.234095   62924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.242290   62924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:55:13.249934   62924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.258277   62924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.271670   62924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
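The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place; a short illustrative check of the resulting settings on the node:

    # Expect pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd",
    # conmon_cgroup = "pod" and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
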
	I1101 09:55:13.280273   62924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:55:13.287481   62924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 09:55:13.287554   62924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 09:55:13.299428   62924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
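The modprobe above loads br_netfilter so the bridge sysctl becomes available, and the echo enables IPv4 forwarding; verifying both on the node is a one-liner:

    # Print the bridge netfilter and IPv4 forwarding settings touched above
    sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
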
	I1101 09:55:13.306908   62924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:55:13.385935   62924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:55:13.489559   62924 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:55:13.489632   62924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:55:13.493507   62924 start.go:564] Will wait 60s for crictl version
	I1101 09:55:13.493559   62924 ssh_runner.go:195] Run: which crictl
	I1101 09:55:13.497104   62924 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:55:13.521265   62924 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:55:13.521351   62924 ssh_runner.go:195] Run: crio --version
	I1101 09:55:13.548096   62924 ssh_runner.go:195] Run: crio --version
	I1101 09:55:13.576326   62924 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:55:13.577365   62924 cli_runner.go:164] Run: docker network inspect addons-407417 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:55:13.595055   62924 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:55:13.599188   62924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:55:13.609196   62924 kubeadm.go:884] updating cluster {Name:addons-407417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-407417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:55:13.609347   62924 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:55:13.609419   62924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:55:13.641215   62924 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:55:13.641236   62924 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:55:13.641295   62924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:55:13.666899   62924 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:55:13.666923   62924 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:55:13.666933   62924 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:55:13.667052   62924 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-407417 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-407417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:55:13.667116   62924 ssh_runner.go:195] Run: crio config
	I1101 09:55:13.711542   62924 cni.go:84] Creating CNI manager for ""
	I1101 09:55:13.711570   62924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:55:13.711587   62924 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:55:13.711614   62924 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-407417 NodeName:addons-407417 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:55:13.711729   62924 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-407417"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
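The kubeadm configuration rendered above is written to /var/tmp/minikube/kubeadm.yaml.new further down; as an illustrative sketch (not something the test does), it could be exercised against the bundled kubeadm binary without touching the cluster:

    # Dry-run an init with the generated config, using the binary path logged below
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
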
	
	I1101 09:55:13.711786   62924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:55:13.719753   62924 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:55:13.719820   62924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:55:13.727570   62924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:55:13.739768   62924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
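The 10-kubeadm.conf drop-in and kubelet.service unit written above take effect after the daemon-reload further down; the merged unit can be inspected on the node with:

    # Show kubelet.service together with its drop-ins, including the ExecStart override
    systemctl cat kubelet
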
	I1101 09:55:13.754478   62924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1101 09:55:13.766996   62924 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:55:13.771080   62924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:55:13.781068   62924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:55:13.856030   62924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:55:13.880005   62924 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417 for IP: 192.168.49.2
	I1101 09:55:13.880032   62924 certs.go:195] generating shared ca certs ...
	I1101 09:55:13.880055   62924 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:13.880204   62924 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 09:55:14.539567   62924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt ...
	I1101 09:55:14.539600   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt: {Name:mk702db875df4acab57078dae280f2b2a2f2d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:14.539780   62924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key ...
	I1101 09:55:14.539793   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key: {Name:mk1e6252eae50628f5658754b8732e32c27dd8a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:14.539869   62924 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 09:55:14.618426   62924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt ...
	I1101 09:55:14.618455   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt: {Name:mk405201ebf4c9c1c06e402900eb7549fe0938be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:14.618653   62924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key ...
	I1101 09:55:14.618671   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key: {Name:mk3998e6fe53b349d815968bec1eef1bbde8c335 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:14.618773   62924 certs.go:257] generating profile certs ...
	I1101 09:55:14.618835   62924 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.key
	I1101 09:55:14.618849   62924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt with IP's: []
	I1101 09:55:15.345709   62924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt ...
	I1101 09:55:15.345754   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: {Name:mkfa8088a21e6394b2b26b7b5a36db558b623a85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:15.345987   62924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.key ...
	I1101 09:55:15.346008   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.key: {Name:mk4eaeec3d3bce07da7288b24b08eb60314781f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:15.346109   62924 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.key.f2421868
	I1101 09:55:15.346128   62924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.crt.f2421868 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 09:55:15.872583   62924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.crt.f2421868 ...
	I1101 09:55:15.872618   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.crt.f2421868: {Name:mke93356eb21971205323e03b3e9302323daf519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:15.872799   62924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.key.f2421868 ...
	I1101 09:55:15.872813   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.key.f2421868: {Name:mk5faaa6d1a4d4cbf379ee73a264d97714d15761 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:15.872886   62924 certs.go:382] copying /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.crt.f2421868 -> /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.crt
	I1101 09:55:15.873007   62924 certs.go:386] copying /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.key.f2421868 -> /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.key
	I1101 09:55:15.873067   62924 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.key
	I1101 09:55:15.873086   62924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.crt with IP's: []
	I1101 09:55:16.062169   62924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.crt ...
	I1101 09:55:16.062205   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.crt: {Name:mke7a14da0c291b6679a01bd7d8fb523f64c90d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:16.062384   62924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.key ...
	I1101 09:55:16.062396   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.key: {Name:mk1d47f2c26e2a0e004ee7360e3a4ab78937f762 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:16.062588   62924 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:55:16.062626   62924 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:55:16.062650   62924 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:55:16.062682   62924 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 09:55:16.063295   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:55:16.081705   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:55:16.099646   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:55:16.118332   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 09:55:16.137167   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:55:16.154829   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:55:16.172278   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:55:16.189397   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:55:16.206743   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:55:16.226114   62924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:55:16.238713   62924 ssh_runner.go:195] Run: openssl version
	I1101 09:55:16.244808   62924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:55:16.255948   62924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:55:16.259972   62924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:55:16.260039   62924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:55:16.294415   62924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:55:16.303537   62924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:55:16.307294   62924 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:55:16.307344   62924 kubeadm.go:401] StartCluster: {Name:addons-407417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-407417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:55:16.307411   62924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:55:16.307476   62924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:55:16.335569   62924 cri.go:89] found id: ""
	I1101 09:55:16.335630   62924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:55:16.343861   62924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:55:16.352115   62924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:55:16.352169   62924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:55:16.360142   62924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:55:16.360160   62924 kubeadm.go:158] found existing configuration files:
	
	I1101 09:55:16.360222   62924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:55:16.367977   62924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:55:16.368047   62924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:55:16.375443   62924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:55:16.383132   62924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:55:16.383194   62924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:55:16.390790   62924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:55:16.398343   62924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:55:16.398395   62924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:55:16.405734   62924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:55:16.413152   62924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:55:16.413215   62924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:55:16.420625   62924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
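	Note: the long --ignore-preflight-errors list above (Swap, NumCPU, Mem, SystemVerification, port/file checks) is expected when the control plane runs inside a Docker container, where those host-level checks do not apply. If needed, the same preflight phase can be re-run by hand; a sketch, assuming the config path and binary location from this run:
	
	  sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all
	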
	I1101 09:55:16.459366   62924 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:55:16.459479   62924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:55:16.479978   62924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:55:16.480073   62924 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:55:16.480125   62924 kubeadm.go:319] OS: Linux
	I1101 09:55:16.480232   62924 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:55:16.480320   62924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:55:16.480406   62924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:55:16.480483   62924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:55:16.480570   62924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:55:16.480644   62924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:55:16.480721   62924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:55:16.480787   62924 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:55:16.540622   62924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:55:16.540819   62924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:55:16.540973   62924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:55:16.547809   62924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:55:16.550349   62924 out.go:252]   - Generating certificates and keys ...
	I1101 09:55:16.550431   62924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:55:16.550516   62924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:55:16.658354   62924 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:55:17.025451   62924 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:55:17.271071   62924 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:55:17.381663   62924 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:55:17.761923   62924 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:55:17.762072   62924 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-407417 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:55:17.873959   62924 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:55:17.874080   62924 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-407417 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:55:18.070211   62924 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:55:18.324120   62924 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:55:18.621648   62924 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:55:18.621754   62924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:55:18.710522   62924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:55:18.768529   62924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:55:18.835765   62924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:55:19.232946   62924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:55:19.439650   62924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:55:19.440006   62924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:55:19.443644   62924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:55:19.445126   62924 out.go:252]   - Booting up control plane ...
	I1101 09:55:19.445252   62924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:55:19.445352   62924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:55:19.446393   62924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:55:19.459369   62924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:55:19.459540   62924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:55:19.465419   62924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:55:19.465695   62924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:55:19.465789   62924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:55:19.561433   62924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:55:19.561597   62924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:55:20.563012   62924 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001637649s
	I1101 09:55:20.566876   62924 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:55:20.567028   62924 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 09:55:20.567175   62924 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:55:20.567335   62924 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:55:21.574881   62924 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.007964126s
	I1101 09:55:22.264739   62924 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.697829311s
	I1101 09:55:24.069015   62924 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502086111s
	I1101 09:55:24.080520   62924 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:55:24.089590   62924 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:55:24.097392   62924 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:55:24.097699   62924 kubeadm.go:319] [mark-control-plane] Marking the node addons-407417 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:55:24.104739   62924 kubeadm.go:319] [bootstrap-token] Using token: vt78av.szo12tr3p6vo9ys2
	I1101 09:55:24.106010   62924 out.go:252]   - Configuring RBAC rules ...
	I1101 09:55:24.106144   62924 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:55:24.109694   62924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:55:24.113909   62924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:55:24.116190   62924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:55:24.118149   62924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:55:24.121001   62924 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:55:24.475388   62924 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:55:24.890245   62924 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:55:25.474859   62924 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:55:25.475873   62924 kubeadm.go:319] 
	I1101 09:55:25.475980   62924 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:55:25.475995   62924 kubeadm.go:319] 
	I1101 09:55:25.476110   62924 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:55:25.476121   62924 kubeadm.go:319] 
	I1101 09:55:25.476161   62924 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:55:25.476255   62924 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:55:25.476336   62924 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:55:25.476343   62924 kubeadm.go:319] 
	I1101 09:55:25.476446   62924 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:55:25.476465   62924 kubeadm.go:319] 
	I1101 09:55:25.476561   62924 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:55:25.476578   62924 kubeadm.go:319] 
	I1101 09:55:25.476656   62924 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:55:25.476779   62924 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:55:25.476877   62924 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:55:25.476889   62924 kubeadm.go:319] 
	I1101 09:55:25.476990   62924 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:55:25.477107   62924 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:55:25.477118   62924 kubeadm.go:319] 
	I1101 09:55:25.477230   62924 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vt78av.szo12tr3p6vo9ys2 \
	I1101 09:55:25.477373   62924 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:940bb8e1f96ef3c88df818902bd8202f25d19108c9c93fa4896a1f509b4cfb64 \
	I1101 09:55:25.477405   62924 kubeadm.go:319] 	--control-plane 
	I1101 09:55:25.477415   62924 kubeadm.go:319] 
	I1101 09:55:25.477545   62924 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:55:25.477557   62924 kubeadm.go:319] 
	I1101 09:55:25.477660   62924 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vt78av.szo12tr3p6vo9ys2 \
	I1101 09:55:25.477801   62924 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:940bb8e1f96ef3c88df818902bd8202f25d19108c9c93fa4896a1f509b4cfb64 
	I1101 09:55:25.479043   62924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:55:25.479162   62924 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
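	Note: the control-plane checks above poll the health endpoints kubeadm reports (kube-apiserver livez, kube-controller-manager healthz, kube-scheduler livez). They can be repeated from inside the node if a later test needs to confirm the control plane is still healthy; a sketch using the addresses from this log:
	
	  minikube ssh -p addons-407417 -- curl -sk https://192.168.49.2:8443/livez
	  minikube ssh -p addons-407417 -- curl -sk https://127.0.0.1:10257/healthz
	  minikube ssh -p addons-407417 -- curl -sk https://127.0.0.1:10259/livez
	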
	I1101 09:55:25.479176   62924 cni.go:84] Creating CNI manager for ""
	I1101 09:55:25.479183   62924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:55:25.480585   62924 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:55:25.481607   62924 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:55:25.486015   62924 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:55:25.486031   62924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:55:25.498482   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:55:25.690303   62924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:55:25.690431   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:25.690438   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-407417 minikube.k8s.io/updated_at=2025_11_01T09_55_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=addons-407417 minikube.k8s.io/primary=true
	I1101 09:55:25.699707   62924 ops.go:34] apiserver oom_adj: -16
	I1101 09:55:25.764741   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:26.265251   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:26.765111   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:27.265132   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:27.765676   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:28.265594   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:28.765817   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:29.265085   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:29.764973   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:30.265586   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:30.333601   62924 kubeadm.go:1114] duration metric: took 4.643265118s to wait for elevateKubeSystemPrivileges
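	Note: the repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges wait noted in the duration metric: minikube creates the minikube-rbac cluster-admin binding for kube-system:default and then polls until the default ServiceAccount exists. An equivalent check from the host, assuming the kubeconfig context created for this profile:
	
	  kubectl --context addons-407417 -n default get serviceaccount default
	  kubectl --context addons-407417 get clusterrolebinding minikube-rbac
	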
	I1101 09:55:30.333650   62924 kubeadm.go:403] duration metric: took 14.02631068s to StartCluster
	I1101 09:55:30.333674   62924 settings.go:142] acquiring lock: {Name:mka443f0ac99a59b23190497686b8296dc73358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:30.333781   62924 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 09:55:30.334887   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:30.335191   62924 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:55:30.335378   62924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:55:30.335566   62924 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 09:55:30.335711   62924 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:55:30.335793   62924 addons.go:70] Setting inspektor-gadget=true in profile "addons-407417"
	I1101 09:55:30.335791   62924 addons.go:70] Setting yakd=true in profile "addons-407417"
	I1101 09:55:30.335809   62924 addons.go:239] Setting addon inspektor-gadget=true in "addons-407417"
	I1101 09:55:30.335818   62924 addons.go:239] Setting addon yakd=true in "addons-407417"
	I1101 09:55:30.335824   62924 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-407417"
	I1101 09:55:30.335855   62924 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-407417"
	I1101 09:55:30.335864   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.335840   62924 addons.go:70] Setting default-storageclass=true in profile "addons-407417"
	I1101 09:55:30.335872   62924 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-407417"
	I1101 09:55:30.335878   62924 addons.go:70] Setting registry-creds=true in profile "addons-407417"
	I1101 09:55:30.335893   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.335906   62924 addons.go:239] Setting addon registry-creds=true in "addons-407417"
	I1101 09:55:30.336119   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.335950   62924 addons.go:70] Setting cloud-spanner=true in profile "addons-407417"
	I1101 09:55:30.336221   62924 addons.go:239] Setting addon cloud-spanner=true in "addons-407417"
	I1101 09:55:30.336268   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.335867   62924 addons.go:70] Setting metrics-server=true in profile "addons-407417"
	I1101 09:55:30.336307   62924 addons.go:239] Setting addon metrics-server=true in "addons-407417"
	I1101 09:55:30.336338   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.336711   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.336722   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.335975   62924 addons.go:70] Setting registry=true in profile "addons-407417"
	I1101 09:55:30.336817   62924 addons.go:239] Setting addon registry=true in "addons-407417"
	I1101 09:55:30.336847   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.336895   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.335984   62924 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-407417"
	I1101 09:55:30.336897   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.336714   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.336004   62924 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-407417"
	I1101 09:55:30.337071   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.335991   62924 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-407417"
	I1101 09:55:30.337292   62924 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-407417"
	I1101 09:55:30.337320   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.336018   62924 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-407417"
	I1101 09:55:30.337530   62924 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-407417"
	I1101 09:55:30.337593   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.338002   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.338013   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.338029   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.338954   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.336018   62924 addons.go:70] Setting volcano=true in profile "addons-407417"
	I1101 09:55:30.344300   62924 addons.go:239] Setting addon volcano=true in "addons-407417"
	I1101 09:55:30.344386   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.336027   62924 addons.go:70] Setting volumesnapshots=true in profile "addons-407417"
	I1101 09:55:30.336037   62924 addons.go:70] Setting ingress=true in profile "addons-407417"
	I1101 09:55:30.336042   62924 addons.go:70] Setting ingress-dns=true in profile "addons-407417"
	I1101 09:55:30.336050   62924 addons.go:70] Setting gcp-auth=true in profile "addons-407417"
	I1101 09:55:30.335981   62924 addons.go:70] Setting storage-provisioner=true in profile "addons-407417"
	I1101 09:55:30.337021   62924 out.go:179] * Verifying Kubernetes components...
	I1101 09:55:30.335906   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.344890   62924 addons.go:239] Setting addon volumesnapshots=true in "addons-407417"
	I1101 09:55:30.344908   62924 addons.go:239] Setting addon ingress=true in "addons-407417"
	I1101 09:55:30.345202   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.345279   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.344921   62924 addons.go:239] Setting addon ingress-dns=true in "addons-407417"
	I1101 09:55:30.345556   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.345803   62924 mustload.go:66] Loading cluster: addons-407417
	I1101 09:55:30.345961   62924 addons.go:239] Setting addon storage-provisioner=true in "addons-407417"
	I1101 09:55:30.346052   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.348111   62924 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:55:30.348560   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.349130   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.349267   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.349470   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.349692   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.349863   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.350590   62924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:55:30.360487   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.370657   62924 addons.go:239] Setting addon default-storageclass=true in "addons-407417"
	I1101 09:55:30.370709   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.371229   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.378031   62924 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:55:30.379124   62924 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:55:30.379146   62924 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:55:30.379233   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.389653   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:55:30.390700   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:55:30.392329   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:55:30.395762   62924 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:55:30.396261   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:55:30.398303   62924 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:55:30.398324   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:55:30.398388   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.401754   62924 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 09:55:30.402214   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:55:30.407099   62924 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:55:30.407124   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:55:30.407199   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.407386   62924 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:55:30.407536   62924 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:55:30.409117   62924 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:55:30.409329   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:55:30.409537   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.409249   62924 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:55:30.410795   62924 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:55:30.410853   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.412834   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:55:30.414348   62924 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:55:30.416077   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:55:30.416174   62924 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:55:30.416440   62924 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 09:55:30.417150   62924 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:55:30.417833   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:55:30.417887   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.417444   62924 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:55:30.419203   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:55:30.420566   62924 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:55:30.420586   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:55:30.420639   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.421931   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:55:30.421947   62924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:55:30.422040   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.422191   62924 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:55:30.422202   62924 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:55:30.422253   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.440802   62924 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:55:30.441965   62924 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:55:30.441986   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:55:30.442064   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.446753   62924 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:55:30.448279   62924 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:55:30.448346   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:55:30.448433   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	W1101 09:55:30.451017   62924 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
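	Note: the volcano addon is rejected here because, as the warning states, it does not support the crio runtime; the remaining addons continue to install. The resulting addon set can be reviewed with, for example:
	
	  minikube addons list -p addons-407417
	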
	I1101 09:55:30.453680   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:55:30.453923   62924 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:55:30.454426   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.454650   62924 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-407417"
	I1101 09:55:30.454695   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.454819   62924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:55:30.454836   62924 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:55:30.454891   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.455135   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.462812   62924 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:55:30.465095   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.470298   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.470368   62924 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:55:30.471172   62924 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:55:30.471193   62924 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:55:30.471261   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.471550   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.472857   62924 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:55:30.472878   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:55:30.472937   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.480628   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.484746   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.486712   62924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:55:30.504624   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.507794   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.508293   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.509214   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.516695   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.518271   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.522869   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.525021   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.534459   62924 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W1101 09:55:30.534844   62924 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:55:30.536838   62924 retry.go:31] will retry after 257.655026ms: ssh: handshake failed: EOF
	I1101 09:55:30.536772   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.538084   62924 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:55:30.539071   62924 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:55:30.539123   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:55:30.539192   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.565703   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.566585   62924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:55:30.647526   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:55:30.649366   62924 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:30.649386   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:55:30.649413   62924 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:55:30.649427   62924 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:55:30.669621   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:30.674479   62924 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:55:30.674517   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:55:30.679651   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:55:30.683055   62924 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:55:30.683076   62924 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:55:30.707569   62924 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:55:30.707653   62924 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:55:30.708224   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:55:30.710066   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:55:30.711585   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:55:30.711651   62924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:55:30.715755   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:55:30.716310   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:55:30.735568   62924 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:55:30.735598   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:55:30.736109   62924 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:55:30.736128   62924 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:55:30.740592   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:55:30.740613   62924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:55:30.746283   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:55:30.763582   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:55:30.765850   62924 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:55:30.765898   62924 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:55:30.766127   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:55:30.802693   62924 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:55:30.802717   62924 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:55:30.803103   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:55:30.805372   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:55:30.805437   62924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:55:30.814129   62924 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:55:30.814149   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:55:30.865998   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:55:30.866852   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:55:30.866925   62924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:55:30.882000   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:55:30.938454   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:55:30.938575   62924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:55:30.958774   62924 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
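The host record injection above rewrites the CoreDNS ConfigMap so that pods can resolve host.minikube.internal to the host gateway address (192.168.49.1 here). A quick way to confirm the injected entry, assuming the default coredns ConfigMap in kube-system, is:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 2 hosts

which should show a hosts stanza along the lines of "192.168.49.1 host.minikube.internal" followed by "fallthrough" (exact layout may differ by minikube version).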
	I1101 09:55:30.960665   62924 node_ready.go:35] waiting up to 6m0s for node "addons-407417" to be "Ready" ...
	I1101 09:55:31.016663   62924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:55:31.016761   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:55:31.043303   62924 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:55:31.043411   62924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:55:31.080306   62924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:55:31.080360   62924 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:55:31.130054   62924 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:55:31.130149   62924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:55:31.143261   62924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:55:31.143346   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:55:31.197521   62924 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:55:31.197624   62924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:55:31.230114   62924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:55:31.230216   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:55:31.249078   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:55:31.249189   62924 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:55:31.272928   62924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:55:31.273047   62924 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:55:31.288237   62924 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:55:31.288354   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:55:31.319105   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:55:31.339355   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:55:31.468141   62924 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-407417" context rescaled to 1 replicas
	W1101 09:55:31.485853   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:31.485893   62924 retry.go:31] will retry after 311.464742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
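The failure above is a client-side validation error, not an API rejection: kubectl refuses ig-crd.yaml because at least one YAML document in it is missing the apiVersion and kind fields that every Kubernetes object needs, so the retries cannot succeed until the file itself changes (or validation is bypassed with --validate=false, as the message suggests). As a rough illustration, a well-formed CRD document opens with a header like the one sketched below (placeholder values, not the actual contents of ig-crd.yaml), and the same check can be reproduced without touching the cluster via a client-side dry run:

	# Every object document needs a header of this shape (placeholder values):
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition
	#   metadata:
	#     name: <crd-name>
	# Reproduce the validation failure without applying anything:
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
	  -f /etc/kubernetes/addons/ig-crd.yaml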
	I1101 09:55:31.797564   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:31.936227   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.18990242s)
	I1101 09:55:31.936270   62924 addons.go:480] Verifying addon ingress=true in "addons-407417"
	I1101 09:55:31.936355   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.17273797s)
	I1101 09:55:31.936413   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.170250146s)
	I1101 09:55:31.936571   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.070354216s)
	I1101 09:55:31.936593   62924 addons.go:480] Verifying addon metrics-server=true in "addons-407417"
	I1101 09:55:31.936458   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.133311623s)
	I1101 09:55:31.936638   62924 addons.go:480] Verifying addon registry=true in "addons-407417"
	I1101 09:55:31.936679   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.054656045s)
	I1101 09:55:31.937755   62924 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-407417 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:55:31.937764   62924 out.go:179] * Verifying ingress addon...
	I1101 09:55:31.937808   62924 out.go:179] * Verifying registry addon...
	I1101 09:55:31.940156   62924 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:55:31.940331   62924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:55:31.942801   62924 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:55:31.942826   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:31.942880   62924 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:55:31.942900   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
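The two waits above poll pods by label selector until they report Ready. The same state can be inspected by hand using the selectors and namespaces from the log:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx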
	I1101 09:55:32.141586   62924 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-407417"
	I1101 09:55:32.143189   62924 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:55:32.145228   62924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:55:32.148804   62924 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:55:32.148821   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:32.444706   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:32.444759   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:32.465751   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.126350642s)
	W1101 09:55:32.465797   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:55:32.465825   62924 retry.go:31] will retry after 348.868544ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
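This failure is an ordering problem rather than a bad manifest: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, so the snapshot.storage.k8s.io/v1 kinds are not yet registered when the class is validated ("ensure CRDs are installed first"), and the retry succeeds once the CRDs are established. A hedged sketch of how the race could be avoided manually, reusing the CRD name from the stdout above:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml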
	W1101 09:55:32.511143   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:32.511180   62924 retry.go:31] will retry after 538.435574ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:32.649032   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:32.815846   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:55:32.943051   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:32.943256   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:32.964311   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
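The node readiness polling above keys off the Ready condition in the node status; the same value can be read directly while the test runs, for example:

	kubectl get node addons-407417 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'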
	I1101 09:55:33.050451   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:33.149406   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:33.443801   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:33.443998   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:33.647944   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:33.943399   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:33.943549   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:34.148316   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:34.444116   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:34.444333   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:34.648878   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:34.943960   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:34.944072   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:35.148744   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:35.282459   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.466561073s)
	I1101 09:55:35.282554   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.232073363s)
	W1101 09:55:35.282593   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:35.282625   62924 retry.go:31] will retry after 497.339744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:35.443843   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:35.443897   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:35.462626   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:35.648822   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:35.780379   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:35.944100   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:35.944265   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:36.149042   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:55:36.313312   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:36.313348   62924 retry.go:31] will retry after 1.000582141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:36.443301   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:36.443536   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:36.648688   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:36.943816   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:36.944008   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:37.148919   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:37.314803   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:37.444840   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:37.444910   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:37.463405   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:37.648649   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:55:37.840791   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:37.840826   62924 retry.go:31] will retry after 1.024024598s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:37.944219   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:37.944385   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:38.060906   62924 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:55:38.060985   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:38.077255   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:38.148442   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:38.181261   62924 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:55:38.193621   62924 addons.go:239] Setting addon gcp-auth=true in "addons-407417"
	I1101 09:55:38.193681   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:38.194032   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:38.211778   62924 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:55:38.211827   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:38.228040   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:38.324947   62924 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:55:38.326049   62924 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:55:38.326952   62924 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:55:38.326967   62924 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:55:38.339666   62924 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:55:38.339686   62924 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:55:38.352129   62924 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:55:38.352151   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:55:38.364446   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:55:38.443256   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:38.443420   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:38.649809   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:38.659734   62924 addons.go:480] Verifying addon gcp-auth=true in "addons-407417"
	I1101 09:55:38.661061   62924 out.go:179] * Verifying gcp-auth addon...
	I1101 09:55:38.662968   62924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:55:38.750052   62924 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:55:38.750074   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
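gcp-auth verification follows the same pattern as the other addons: a single pod is expected in the gcp-auth namespace carrying the label shown above. To watch it converge by hand:

	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth -w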
	I1101 09:55:38.865113   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:38.943097   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:38.943309   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:39.148720   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:39.166120   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:55:39.397598   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:39.397631   62924 retry.go:31] will retry after 1.062181945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:39.443582   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:39.443698   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:39.463750   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:39.648701   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:39.666150   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:39.943075   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:39.943160   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:40.148818   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:40.165889   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:40.443875   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:40.444061   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:40.460166   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:40.648112   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:40.665624   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:40.943746   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:40.943881   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:40.985976   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:40.986019   62924 retry.go:31] will retry after 3.554677844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:41.148761   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:41.166011   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:41.443797   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:41.443985   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:41.648113   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:41.666461   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:41.943214   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:41.943361   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:41.964204   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:42.148944   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:42.166354   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:42.443285   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:42.443594   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:42.648746   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:42.665971   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:42.944257   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:42.944321   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:43.148471   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:43.165768   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:43.443726   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:43.443770   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:43.649224   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:43.665513   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:43.943594   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:43.943728   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:44.148723   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:44.166068   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:44.442873   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:44.442890   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:44.462940   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:44.541150   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:44.648481   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:44.666188   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:44.944192   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:44.944206   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:55:45.084419   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:45.084453   62924 retry.go:31] will retry after 5.991451126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:45.148301   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:45.166010   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:45.444246   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:45.444341   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:45.648556   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:45.666030   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:45.943760   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:45.943802   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:46.147860   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:46.166356   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:46.443115   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:46.443359   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:46.463487   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:46.648315   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:46.665621   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:46.943886   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:46.943897   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:47.148048   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:47.166596   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:47.443712   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:47.443922   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:47.648561   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:47.665971   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:47.944261   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:47.944311   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:48.148111   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:48.166206   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:48.442783   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:48.442927   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:48.648328   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:48.665737   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:48.943787   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:48.943951   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:48.963063   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:49.148920   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:49.165846   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:49.443752   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:49.443963   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:49.648730   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:49.666026   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:49.943590   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:49.943800   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:50.148671   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:50.166175   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:50.443097   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:50.443190   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:50.647815   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:50.665987   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:50.943190   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:50.943193   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:51.076292   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:51.148460   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:51.166050   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:51.443169   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:51.443194   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:51.463930   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	W1101 09:55:51.604296   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:51.604335   62924 retry.go:31] will retry after 8.682890672s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:51.647626   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:51.665978   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:51.943858   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:51.943923   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:52.148772   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:52.166176   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:52.442853   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:52.443033   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:52.648724   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:52.666167   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:52.943443   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:52.943596   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:53.148915   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:53.165952   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:53.443657   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:53.443815   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:53.648553   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:53.665955   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:53.944074   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:53.944217   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:53.963652   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:54.148291   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:54.165601   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:54.443573   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:54.443636   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:54.648613   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:54.665920   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:54.944416   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:54.944465   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:55.148860   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:55.166188   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:55.442834   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:55.442921   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:55.648824   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:55.665896   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:55.943713   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:55.943913   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:56.148644   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:56.165797   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:56.443612   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:56.443806   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:56.464099   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:56.648763   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:56.666056   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:56.943000   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:56.943175   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:57.148126   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:57.165569   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:57.443483   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:57.443631   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:57.648626   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:57.665521   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:57.943473   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:57.943476   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:58.148230   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:58.165327   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:58.443045   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:58.443185   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:58.648081   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:58.666302   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:58.943100   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:58.943161   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:58.963154   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:59.148778   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:59.166146   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:59.442730   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:59.442913   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:59.648625   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:59.665891   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:59.943752   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:59.943796   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:00.148679   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:00.166041   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:00.288244   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:56:00.443443   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:00.444249   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:00.648288   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:00.666213   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:56:00.815625   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:00.815662   62924 retry.go:31] will retry after 8.529180304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:00.943719   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:00.943783   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:56:00.963888   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:56:01.148717   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:01.166620   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:01.443584   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:01.443616   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:01.648632   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:01.666227   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:01.942995   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:01.943145   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:02.148547   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:02.166229   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:02.443158   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:02.443988   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:02.648620   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:02.666296   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:02.943408   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:02.943474   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:56:02.964297   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:56:03.149075   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:03.167039   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:03.444755   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:03.444793   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:03.648480   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:03.666124   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:03.943470   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:03.943639   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:04.148543   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:04.166232   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:04.443091   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:04.443148   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:04.649075   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:04.667156   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:04.943287   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:04.943295   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:05.148063   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:05.166654   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:05.443961   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:05.444214   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:56:05.463796   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:56:05.649092   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:05.665719   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:05.943552   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:05.943593   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:06.149007   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:06.166767   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:06.443664   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:06.443787   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:06.648264   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:06.665717   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:06.943592   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:06.943659   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:07.148682   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:07.166028   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:07.442936   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:07.443018   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:56:07.464133   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:56:07.648841   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:07.666374   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:07.943411   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:07.943490   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:08.148262   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:08.165877   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:08.443813   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:08.444041   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:08.648654   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:08.666416   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:08.943376   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:08.943597   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:09.148685   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:09.166177   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:09.345420   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:56:09.443640   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:09.443692   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:56:09.464187   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:56:09.647859   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:09.666443   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:56:09.884154   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:09.884188   62924 retry.go:31] will retry after 8.502826362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:09.943735   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:09.943920   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:10.148837   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:10.166095   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:10.443078   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:10.443118   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:10.648552   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:10.665840   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:10.943696   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:10.943743   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:11.148634   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:11.166013   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:11.442984   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:11.443136   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:11.648221   62924 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:56:11.648247   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:11.668569   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:11.946013   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:11.946146   62924 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:56:11.946159   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:11.963524   62924 node_ready.go:49] node "addons-407417" is "Ready"
	I1101 09:56:11.963557   62924 node_ready.go:38] duration metric: took 41.002865653s for node "addons-407417" to be "Ready" ...
	I1101 09:56:11.963577   62924 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:56:11.963719   62924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:56:11.984201   62924 api_server.go:72] duration metric: took 41.648963665s to wait for apiserver process to appear ...
	I1101 09:56:11.984233   62924 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:56:11.984278   62924 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 09:56:11.991254   62924 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 09:56:11.992307   62924 api_server.go:141] control plane version: v1.34.1
	I1101 09:56:11.992336   62924 api_server.go:131] duration metric: took 8.088882ms to wait for apiserver health ...
	I1101 09:56:11.992391   62924 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:56:12.051053   62924 system_pods.go:59] 20 kube-system pods found
	I1101 09:56:12.051165   62924 system_pods.go:61] "amd-gpu-device-plugin-f46dd" [a6f5f39e-d94a-44ab-bb1f-1030e866f7e6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:56:12.051191   62924 system_pods.go:61] "coredns-66bc5c9577-gp9gr" [0ee3c912-ced4-4f4b-953f-d678b6fd20a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:12.051217   62924 system_pods.go:61] "csi-hostpath-attacher-0" [b7d80cb8-3953-495e-8ff1-355cb55f7ea0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:56:12.051231   62924 system_pods.go:61] "csi-hostpath-resizer-0" [b978cead-a653-4b1a-a343-39d3661a9db5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:56:12.051241   62924 system_pods.go:61] "csi-hostpathplugin-znf7c" [b84da354-d1e0-4555-9d57-a3c3e64663ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:56:12.051247   62924 system_pods.go:61] "etcd-addons-407417" [420e5b78-bddc-441b-914e-21a22b02c8e6] Running
	I1101 09:56:12.051254   62924 system_pods.go:61] "kindnet-662bf" [de1770a9-8ee3-4d49-a598-db9216fb6921] Running
	I1101 09:56:12.051260   62924 system_pods.go:61] "kube-apiserver-addons-407417" [e4196ce7-ee9a-4f1f-8098-8b16da77ef57] Running
	I1101 09:56:12.051273   62924 system_pods.go:61] "kube-controller-manager-addons-407417" [da16686d-5f04-4b11-b9cd-678c5d8575c4] Running
	I1101 09:56:12.051285   62924 system_pods.go:61] "kube-ingress-dns-minikube" [d9f29d4b-f12f-473a-bc27-3e9e258e7e8f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:56:12.051309   62924 system_pods.go:61] "kube-proxy-f5sgj" [f3879aa7-11a0-4687-aa24-06bb786c5687] Running
	I1101 09:56:12.051316   62924 system_pods.go:61] "kube-scheduler-addons-407417" [9f8d50fb-9742-46d2-80e4-66fe5b9d6518] Running
	I1101 09:56:12.051324   62924 system_pods.go:61] "metrics-server-85b7d694d7-tbn2d" [829a0a39-ab34-4ab2-97ab-4cc6d1ec1844] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:56:12.051333   62924 system_pods.go:61] "nvidia-device-plugin-daemonset-z5mvf" [2a31bc49-c837-4001-8e0c-1855ae5050fd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:56:12.051345   62924 system_pods.go:61] "registry-6b586f9694-httq4" [4d19dfc4-d429-42bc-af5d-d46aeca3a22c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:56:12.051363   62924 system_pods.go:61] "registry-creds-764b6fb674-v2bwb" [88aa6c8f-5e6d-48b4-bb4c-c7607072966d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:56:12.051375   62924 system_pods.go:61] "registry-proxy-cz772" [84b0726b-ceae-40c2-821a-3ac4237df885] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:56:12.051384   62924 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dtxff" [cde0609f-d3d4-470e-a2fa-7b0b2fde0d76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.051418   62924 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nmmp8" [3a7db4a4-1576-4a0c-b776-dcc8d030d5d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.051433   62924 system_pods.go:61] "storage-provisioner" [52bc6536-8dd7-4041-afa6-16ff16f38e7e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:12.051442   62924 system_pods.go:74] duration metric: took 59.039724ms to wait for pod list to return data ...
	I1101 09:56:12.051457   62924 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:56:12.054133   62924 default_sa.go:45] found service account: "default"
	I1101 09:56:12.054158   62924 default_sa.go:55] duration metric: took 2.693328ms for default service account to be created ...
	I1101 09:56:12.054169   62924 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:56:12.148118   62924 system_pods.go:86] 20 kube-system pods found
	I1101 09:56:12.148156   62924 system_pods.go:89] "amd-gpu-device-plugin-f46dd" [a6f5f39e-d94a-44ab-bb1f-1030e866f7e6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:56:12.148164   62924 system_pods.go:89] "coredns-66bc5c9577-gp9gr" [0ee3c912-ced4-4f4b-953f-d678b6fd20a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:12.148174   62924 system_pods.go:89] "csi-hostpath-attacher-0" [b7d80cb8-3953-495e-8ff1-355cb55f7ea0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:56:12.148182   62924 system_pods.go:89] "csi-hostpath-resizer-0" [b978cead-a653-4b1a-a343-39d3661a9db5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:56:12.148190   62924 system_pods.go:89] "csi-hostpathplugin-znf7c" [b84da354-d1e0-4555-9d57-a3c3e64663ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:56:12.148198   62924 system_pods.go:89] "etcd-addons-407417" [420e5b78-bddc-441b-914e-21a22b02c8e6] Running
	I1101 09:56:12.148205   62924 system_pods.go:89] "kindnet-662bf" [de1770a9-8ee3-4d49-a598-db9216fb6921] Running
	I1101 09:56:12.148210   62924 system_pods.go:89] "kube-apiserver-addons-407417" [e4196ce7-ee9a-4f1f-8098-8b16da77ef57] Running
	I1101 09:56:12.148215   62924 system_pods.go:89] "kube-controller-manager-addons-407417" [da16686d-5f04-4b11-b9cd-678c5d8575c4] Running
	I1101 09:56:12.148224   62924 system_pods.go:89] "kube-ingress-dns-minikube" [d9f29d4b-f12f-473a-bc27-3e9e258e7e8f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:56:12.148233   62924 system_pods.go:89] "kube-proxy-f5sgj" [f3879aa7-11a0-4687-aa24-06bb786c5687] Running
	I1101 09:56:12.148239   62924 system_pods.go:89] "kube-scheduler-addons-407417" [9f8d50fb-9742-46d2-80e4-66fe5b9d6518] Running
	I1101 09:56:12.148247   62924 system_pods.go:89] "metrics-server-85b7d694d7-tbn2d" [829a0a39-ab34-4ab2-97ab-4cc6d1ec1844] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:56:12.148258   62924 system_pods.go:89] "nvidia-device-plugin-daemonset-z5mvf" [2a31bc49-c837-4001-8e0c-1855ae5050fd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:56:12.148268   62924 system_pods.go:89] "registry-6b586f9694-httq4" [4d19dfc4-d429-42bc-af5d-d46aeca3a22c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:56:12.148279   62924 system_pods.go:89] "registry-creds-764b6fb674-v2bwb" [88aa6c8f-5e6d-48b4-bb4c-c7607072966d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:56:12.148287   62924 system_pods.go:89] "registry-proxy-cz772" [84b0726b-ceae-40c2-821a-3ac4237df885] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:56:12.148294   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtxff" [cde0609f-d3d4-470e-a2fa-7b0b2fde0d76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.148306   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nmmp8" [3a7db4a4-1576-4a0c-b776-dcc8d030d5d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.148314   62924 system_pods.go:89] "storage-provisioner" [52bc6536-8dd7-4041-afa6-16ff16f38e7e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:12.148335   62924 retry.go:31] will retry after 238.602811ms: missing components: kube-dns
	I1101 09:56:12.149169   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:12.165452   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:12.392951   62924 system_pods.go:86] 20 kube-system pods found
	I1101 09:56:12.392999   62924 system_pods.go:89] "amd-gpu-device-plugin-f46dd" [a6f5f39e-d94a-44ab-bb1f-1030e866f7e6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:56:12.393009   62924 system_pods.go:89] "coredns-66bc5c9577-gp9gr" [0ee3c912-ced4-4f4b-953f-d678b6fd20a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:12.393019   62924 system_pods.go:89] "csi-hostpath-attacher-0" [b7d80cb8-3953-495e-8ff1-355cb55f7ea0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:56:12.393027   62924 system_pods.go:89] "csi-hostpath-resizer-0" [b978cead-a653-4b1a-a343-39d3661a9db5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:56:12.393034   62924 system_pods.go:89] "csi-hostpathplugin-znf7c" [b84da354-d1e0-4555-9d57-a3c3e64663ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:56:12.393042   62924 system_pods.go:89] "etcd-addons-407417" [420e5b78-bddc-441b-914e-21a22b02c8e6] Running
	I1101 09:56:12.393049   62924 system_pods.go:89] "kindnet-662bf" [de1770a9-8ee3-4d49-a598-db9216fb6921] Running
	I1101 09:56:12.393058   62924 system_pods.go:89] "kube-apiserver-addons-407417" [e4196ce7-ee9a-4f1f-8098-8b16da77ef57] Running
	I1101 09:56:12.393063   62924 system_pods.go:89] "kube-controller-manager-addons-407417" [da16686d-5f04-4b11-b9cd-678c5d8575c4] Running
	I1101 09:56:12.393072   62924 system_pods.go:89] "kube-ingress-dns-minikube" [d9f29d4b-f12f-473a-bc27-3e9e258e7e8f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:56:12.393077   62924 system_pods.go:89] "kube-proxy-f5sgj" [f3879aa7-11a0-4687-aa24-06bb786c5687] Running
	I1101 09:56:12.393087   62924 system_pods.go:89] "kube-scheduler-addons-407417" [9f8d50fb-9742-46d2-80e4-66fe5b9d6518] Running
	I1101 09:56:12.393095   62924 system_pods.go:89] "metrics-server-85b7d694d7-tbn2d" [829a0a39-ab34-4ab2-97ab-4cc6d1ec1844] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:56:12.393107   62924 system_pods.go:89] "nvidia-device-plugin-daemonset-z5mvf" [2a31bc49-c837-4001-8e0c-1855ae5050fd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:56:12.393116   62924 system_pods.go:89] "registry-6b586f9694-httq4" [4d19dfc4-d429-42bc-af5d-d46aeca3a22c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:56:12.393122   62924 system_pods.go:89] "registry-creds-764b6fb674-v2bwb" [88aa6c8f-5e6d-48b4-bb4c-c7607072966d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:56:12.393130   62924 system_pods.go:89] "registry-proxy-cz772" [84b0726b-ceae-40c2-821a-3ac4237df885] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:56:12.393138   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtxff" [cde0609f-d3d4-470e-a2fa-7b0b2fde0d76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.393151   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nmmp8" [3a7db4a4-1576-4a0c-b776-dcc8d030d5d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.393161   62924 system_pods.go:89] "storage-provisioner" [52bc6536-8dd7-4041-afa6-16ff16f38e7e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:12.393181   62924 retry.go:31] will retry after 357.856743ms: missing components: kube-dns
	I1101 09:56:12.444267   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:12.444301   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:12.648532   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:12.666448   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:12.757150   62924 system_pods.go:86] 20 kube-system pods found
	I1101 09:56:12.757192   62924 system_pods.go:89] "amd-gpu-device-plugin-f46dd" [a6f5f39e-d94a-44ab-bb1f-1030e866f7e6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:56:12.757208   62924 system_pods.go:89] "coredns-66bc5c9577-gp9gr" [0ee3c912-ced4-4f4b-953f-d678b6fd20a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:12.757219   62924 system_pods.go:89] "csi-hostpath-attacher-0" [b7d80cb8-3953-495e-8ff1-355cb55f7ea0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:56:12.757228   62924 system_pods.go:89] "csi-hostpath-resizer-0" [b978cead-a653-4b1a-a343-39d3661a9db5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:56:12.757241   62924 system_pods.go:89] "csi-hostpathplugin-znf7c" [b84da354-d1e0-4555-9d57-a3c3e64663ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:56:12.757247   62924 system_pods.go:89] "etcd-addons-407417" [420e5b78-bddc-441b-914e-21a22b02c8e6] Running
	I1101 09:56:12.757259   62924 system_pods.go:89] "kindnet-662bf" [de1770a9-8ee3-4d49-a598-db9216fb6921] Running
	I1101 09:56:12.757266   62924 system_pods.go:89] "kube-apiserver-addons-407417" [e4196ce7-ee9a-4f1f-8098-8b16da77ef57] Running
	I1101 09:56:12.757271   62924 system_pods.go:89] "kube-controller-manager-addons-407417" [da16686d-5f04-4b11-b9cd-678c5d8575c4] Running
	I1101 09:56:12.757282   62924 system_pods.go:89] "kube-ingress-dns-minikube" [d9f29d4b-f12f-473a-bc27-3e9e258e7e8f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:56:12.757287   62924 system_pods.go:89] "kube-proxy-f5sgj" [f3879aa7-11a0-4687-aa24-06bb786c5687] Running
	I1101 09:56:12.757294   62924 system_pods.go:89] "kube-scheduler-addons-407417" [9f8d50fb-9742-46d2-80e4-66fe5b9d6518] Running
	I1101 09:56:12.757302   62924 system_pods.go:89] "metrics-server-85b7d694d7-tbn2d" [829a0a39-ab34-4ab2-97ab-4cc6d1ec1844] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:56:12.757314   62924 system_pods.go:89] "nvidia-device-plugin-daemonset-z5mvf" [2a31bc49-c837-4001-8e0c-1855ae5050fd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:56:12.757321   62924 system_pods.go:89] "registry-6b586f9694-httq4" [4d19dfc4-d429-42bc-af5d-d46aeca3a22c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:56:12.757332   62924 system_pods.go:89] "registry-creds-764b6fb674-v2bwb" [88aa6c8f-5e6d-48b4-bb4c-c7607072966d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:56:12.757353   62924 system_pods.go:89] "registry-proxy-cz772" [84b0726b-ceae-40c2-821a-3ac4237df885] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:56:12.757362   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtxff" [cde0609f-d3d4-470e-a2fa-7b0b2fde0d76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.757378   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nmmp8" [3a7db4a4-1576-4a0c-b776-dcc8d030d5d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.757387   62924 system_pods.go:89] "storage-provisioner" [52bc6536-8dd7-4041-afa6-16ff16f38e7e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:12.757408   62924 retry.go:31] will retry after 409.377431ms: missing components: kube-dns
	I1101 09:56:12.943939   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:12.944175   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:13.149265   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:13.165508   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:13.170922   62924 system_pods.go:86] 20 kube-system pods found
	I1101 09:56:13.170955   62924 system_pods.go:89] "amd-gpu-device-plugin-f46dd" [a6f5f39e-d94a-44ab-bb1f-1030e866f7e6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:56:13.170963   62924 system_pods.go:89] "coredns-66bc5c9577-gp9gr" [0ee3c912-ced4-4f4b-953f-d678b6fd20a4] Running
	I1101 09:56:13.170973   62924 system_pods.go:89] "csi-hostpath-attacher-0" [b7d80cb8-3953-495e-8ff1-355cb55f7ea0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:56:13.170980   62924 system_pods.go:89] "csi-hostpath-resizer-0" [b978cead-a653-4b1a-a343-39d3661a9db5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:56:13.171003   62924 system_pods.go:89] "csi-hostpathplugin-znf7c" [b84da354-d1e0-4555-9d57-a3c3e64663ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:56:13.171015   62924 system_pods.go:89] "etcd-addons-407417" [420e5b78-bddc-441b-914e-21a22b02c8e6] Running
	I1101 09:56:13.171022   62924 system_pods.go:89] "kindnet-662bf" [de1770a9-8ee3-4d49-a598-db9216fb6921] Running
	I1101 09:56:13.171033   62924 system_pods.go:89] "kube-apiserver-addons-407417" [e4196ce7-ee9a-4f1f-8098-8b16da77ef57] Running
	I1101 09:56:13.171039   62924 system_pods.go:89] "kube-controller-manager-addons-407417" [da16686d-5f04-4b11-b9cd-678c5d8575c4] Running
	I1101 09:56:13.171049   62924 system_pods.go:89] "kube-ingress-dns-minikube" [d9f29d4b-f12f-473a-bc27-3e9e258e7e8f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:56:13.171058   62924 system_pods.go:89] "kube-proxy-f5sgj" [f3879aa7-11a0-4687-aa24-06bb786c5687] Running
	I1101 09:56:13.171064   62924 system_pods.go:89] "kube-scheduler-addons-407417" [9f8d50fb-9742-46d2-80e4-66fe5b9d6518] Running
	I1101 09:56:13.171072   62924 system_pods.go:89] "metrics-server-85b7d694d7-tbn2d" [829a0a39-ab34-4ab2-97ab-4cc6d1ec1844] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:56:13.171082   62924 system_pods.go:89] "nvidia-device-plugin-daemonset-z5mvf" [2a31bc49-c837-4001-8e0c-1855ae5050fd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:56:13.171095   62924 system_pods.go:89] "registry-6b586f9694-httq4" [4d19dfc4-d429-42bc-af5d-d46aeca3a22c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:56:13.171104   62924 system_pods.go:89] "registry-creds-764b6fb674-v2bwb" [88aa6c8f-5e6d-48b4-bb4c-c7607072966d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:56:13.171113   62924 system_pods.go:89] "registry-proxy-cz772" [84b0726b-ceae-40c2-821a-3ac4237df885] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:56:13.171122   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtxff" [cde0609f-d3d4-470e-a2fa-7b0b2fde0d76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:13.171135   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nmmp8" [3a7db4a4-1576-4a0c-b776-dcc8d030d5d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:13.171142   62924 system_pods.go:89] "storage-provisioner" [52bc6536-8dd7-4041-afa6-16ff16f38e7e] Running
	I1101 09:56:13.171157   62924 system_pods.go:126] duration metric: took 1.116980044s to wait for k8s-apps to be running ...
	I1101 09:56:13.171180   62924 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:56:13.171235   62924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:56:13.185227   62924 system_svc.go:56] duration metric: took 14.034108ms WaitForService to wait for kubelet
	I1101 09:56:13.185267   62924 kubeadm.go:587] duration metric: took 42.850036668s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:56:13.185303   62924 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:56:13.188121   62924 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:56:13.188177   62924 node_conditions.go:123] node cpu capacity is 8
	I1101 09:56:13.188195   62924 node_conditions.go:105] duration metric: took 2.886045ms to run NodePressure ...
	I1101 09:56:13.188207   62924 start.go:242] waiting for startup goroutines ...
	I1101 09:56:13.443266   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:13.443309   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:13.648607   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:13.666086   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:13.943111   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:13.943261   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:14.149383   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:14.165628   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:14.443750   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:14.443809   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:14.648764   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:14.665890   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:14.945557   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:14.945886   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:15.149436   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:15.165691   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:15.443950   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:15.443977   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:15.649685   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:15.665924   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:15.944458   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:15.944572   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:16.149027   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:16.166100   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:16.442801   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:16.442830   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:16.648961   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:16.666078   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:16.943171   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:16.943187   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:17.149252   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:17.165612   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:17.444025   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:17.444059   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:17.649526   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:17.665983   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:17.942987   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:17.943135   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:18.149395   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:18.165461   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:18.387770   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:56:18.444625   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:18.444801   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:18.648836   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:18.666148   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:18.944225   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:18.944279   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:56:19.073162   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:19.073198   62924 retry.go:31] will retry after 16.319584627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:19.149380   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:19.165710   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:19.444355   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:19.444389   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:19.648748   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:19.666511   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:19.943648   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:19.943737   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:20.149133   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:20.166585   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:20.444250   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:20.444371   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:20.648365   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:20.665984   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:20.945142   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:20.945278   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:21.149193   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:21.249824   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:21.444376   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:21.444386   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:21.648919   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:21.666798   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:21.944032   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:21.944073   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:22.149638   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:22.165671   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:22.444142   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:22.444161   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:22.649388   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:22.665881   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:22.944036   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:22.944194   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:23.149400   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:23.165470   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:23.443385   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:23.443461   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:23.648286   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:23.665380   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:23.942985   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:23.943087   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:24.149083   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:24.166221   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:24.443727   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:24.443732   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:24.648560   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:24.665702   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:24.944059   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:24.944212   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:25.149010   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:25.166390   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:25.443343   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:25.443349   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:25.648233   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:25.665042   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:25.942853   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:25.942874   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:26.149830   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:26.167644   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:26.445517   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:26.445811   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:26.649041   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:26.666096   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:26.943475   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:26.943525   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:27.148920   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:27.166649   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:27.444513   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:27.444560   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:27.649376   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:27.734058   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:27.944386   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:27.944426   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:28.157065   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:28.166703   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:28.443840   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:28.443930   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:28.649292   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:28.665798   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:28.944246   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:28.944374   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:29.148779   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:29.166348   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:29.443745   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:29.443763   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:29.649308   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:29.665846   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:29.943877   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:29.943957   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:30.168755   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:30.168806   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:30.444247   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:30.444263   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:30.648989   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:30.666573   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:30.943031   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:30.943299   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:31.148653   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:31.165917   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:31.444207   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:31.444286   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:31.649581   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:31.749476   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:31.944878   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:31.946029   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:32.148462   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:32.165922   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:32.444138   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:32.444178   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:32.649213   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:32.666521   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:32.943184   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:32.943362   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:33.149801   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:33.167868   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:33.443469   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:33.443532   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:33.648775   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:33.665985   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:33.944774   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:33.944805   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:34.149332   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:34.166776   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:34.443957   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:34.444065   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:34.649315   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:34.666239   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:34.943161   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:34.943346   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:35.149240   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:35.165335   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:35.393650   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:56:35.443855   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:35.443953   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:35.648897   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:35.666081   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:35.943771   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:35.943837   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:56:35.948052   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:35.948088   62924 retry.go:31] will retry after 39.728567543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:36.150684   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:36.166361   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:36.444209   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:36.444247   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:36.649123   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:36.666606   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:36.944628   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:36.944672   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:37.148897   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:37.166199   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:37.443801   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:37.443828   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:37.649178   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:37.666894   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:37.944407   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:37.944475   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:38.148602   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:38.165697   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:38.444122   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:38.444365   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:38.648925   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:38.666324   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:38.943321   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:38.943321   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:39.149408   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:39.165247   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:39.443313   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:39.443480   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:39.648189   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:39.666234   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:39.943340   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:39.943376   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:40.148396   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:40.166104   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:40.444896   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:40.444898   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:40.651162   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:40.666588   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:40.944078   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:40.944238   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:41.149691   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:41.166459   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:41.443871   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:41.443925   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:41.649136   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:41.666599   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:41.944043   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:41.944094   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:42.149359   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:42.165333   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:42.443301   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:42.443546   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:42.648193   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:42.665356   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:42.943226   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:42.943349   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:43.148429   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:43.165463   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:43.444112   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:43.444159   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:43.649784   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:43.666275   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:43.943189   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:43.943273   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:44.149897   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:44.166448   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:44.443760   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:44.443825   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:44.649180   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:44.665767   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:44.944007   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:44.944145   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:45.149462   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:45.165624   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:45.444568   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:45.447390   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:45.649519   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:45.668536   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:45.943908   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:45.943967   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:46.149425   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:46.165791   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:46.444106   62924 kapi.go:107] duration metric: took 1m14.503772828s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 09:56:46.444152   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:46.656451   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:46.665556   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:46.944323   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:47.148865   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:47.165781   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:47.444255   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:47.649641   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:47.666332   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:47.945267   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:48.148432   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:48.166189   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:48.447057   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:48.648775   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:48.666330   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:48.943903   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:49.149569   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:49.166140   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:49.443669   62924 kapi.go:107] duration metric: took 1m17.50350601s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:56:49.649059   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:49.666382   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:50.148640   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:50.165731   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:50.649330   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:50.665902   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:51.148651   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:51.166049   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:51.648311   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:51.665455   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:52.149659   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:52.166519   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:52.649095   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:52.666629   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:53.149170   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:53.165448   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:53.648359   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:53.665313   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:54.148253   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:54.165343   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:54.649244   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:54.665350   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:55.148733   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:55.166582   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:55.648811   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:55.666151   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:56.149439   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:56.167252   62924 kapi.go:107] duration metric: took 1m17.504279439s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:56:56.170202   62924 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-407417 cluster.
	I1101 09:56:56.171613   62924 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:56:56.172732   62924 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1101 09:56:56.648844   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:57.149383   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:57.649360   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:58.149881   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:58.649457   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:59.149470   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:59.648801   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:57:00.149373   62924 kapi.go:107] duration metric: took 1m28.00413859s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 09:57:15.679269   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 09:57:16.227869   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:57:16.227987   62924 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 09:57:16.229761   62924 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1101 09:57:16.230697   62924 addons.go:515] duration metric: took 1m45.895139161s for enable addons: enabled=[registry-creds amd-gpu-device-plugin nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1101 09:57:16.230735   62924 start.go:247] waiting for cluster config update ...
	I1101 09:57:16.230760   62924 start.go:256] writing updated cluster config ...
	I1101 09:57:16.231008   62924 ssh_runner.go:195] Run: rm -f paused
	I1101 09:57:16.234923   62924 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:57:16.238354   62924 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gp9gr" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.242420   62924 pod_ready.go:94] pod "coredns-66bc5c9577-gp9gr" is "Ready"
	I1101 09:57:16.242442   62924 pod_ready.go:86] duration metric: took 4.065738ms for pod "coredns-66bc5c9577-gp9gr" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.244614   62924 pod_ready.go:83] waiting for pod "etcd-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.247958   62924 pod_ready.go:94] pod "etcd-addons-407417" is "Ready"
	I1101 09:57:16.247980   62924 pod_ready.go:86] duration metric: took 3.345138ms for pod "etcd-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.249914   62924 pod_ready.go:83] waiting for pod "kube-apiserver-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.253148   62924 pod_ready.go:94] pod "kube-apiserver-addons-407417" is "Ready"
	I1101 09:57:16.253170   62924 pod_ready.go:86] duration metric: took 3.232779ms for pod "kube-apiserver-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.254878   62924 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.638668   62924 pod_ready.go:94] pod "kube-controller-manager-addons-407417" is "Ready"
	I1101 09:57:16.638698   62924 pod_ready.go:86] duration metric: took 383.799572ms for pod "kube-controller-manager-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.838923   62924 pod_ready.go:83] waiting for pod "kube-proxy-f5sgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:17.239049   62924 pod_ready.go:94] pod "kube-proxy-f5sgj" is "Ready"
	I1101 09:57:17.239080   62924 pod_ready.go:86] duration metric: took 400.130153ms for pod "kube-proxy-f5sgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:17.439526   62924 pod_ready.go:83] waiting for pod "kube-scheduler-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:17.838369   62924 pod_ready.go:94] pod "kube-scheduler-addons-407417" is "Ready"
	I1101 09:57:17.838402   62924 pod_ready.go:86] duration metric: took 398.851419ms for pod "kube-scheduler-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:17.838419   62924 pod_ready.go:40] duration metric: took 1.603459028s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:57:17.881002   62924 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:57:17.882582   62924 out.go:179] * Done! kubectl is now configured to use "addons-407417" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 09:58:27 addons-407417 crio[780]: time="2025-11-01T09:58:27.713874179Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=d83d8618-27b6-40bb-a4d3-fb5884bcf8ab name=/runtime.v1.ImageService/PullImage
	Nov 01 09:58:27 addons-407417 crio[780]: time="2025-11-01T09:58:27.715543777Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Nov 01 09:58:29 addons-407417 crio[780]: time="2025-11-01T09:58:29.347460505Z" level=info msg="Pulled image: docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=d83d8618-27b6-40bb-a4d3-fb5884bcf8ab name=/runtime.v1.ImageService/PullImage
	Nov 01 09:58:29 addons-407417 crio[780]: time="2025-11-01T09:58:29.348036589Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=583fe87c-c106-4a4d-a8ee-f4a88a3eec62 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:58:29 addons-407417 crio[780]: time="2025-11-01T09:58:29.38085771Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=0d146a0e-3489-4912-9995-732142b3363e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:58:29 addons-407417 crio[780]: time="2025-11-01T09:58:29.384348606Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-v2bwb/registry-creds" id=8397673d-38f7-411a-b77b-eea520f0532b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:58:29 addons-407417 crio[780]: time="2025-11-01T09:58:29.38446847Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:58:29 addons-407417 crio[780]: time="2025-11-01T09:58:29.390306827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:58:29 addons-407417 crio[780]: time="2025-11-01T09:58:29.390950218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:58:29 addons-407417 crio[780]: time="2025-11-01T09:58:29.430650992Z" level=info msg="Created container 5ed5dd2937c65328e07c15a270f45c1de256fc5ec7e72e7b9515183b574a7a6e: kube-system/registry-creds-764b6fb674-v2bwb/registry-creds" id=8397673d-38f7-411a-b77b-eea520f0532b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:58:29 addons-407417 crio[780]: time="2025-11-01T09:58:29.431075529Z" level=info msg="Starting container: 5ed5dd2937c65328e07c15a270f45c1de256fc5ec7e72e7b9515183b574a7a6e" id=e44a51ba-9c48-4d8f-829b-7d4aa4b3ec68 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:58:29 addons-407417 crio[780]: time="2025-11-01T09:58:29.432911902Z" level=info msg="Started container" PID=9058 containerID=5ed5dd2937c65328e07c15a270f45c1de256fc5ec7e72e7b9515183b574a7a6e description=kube-system/registry-creds-764b6fb674-v2bwb/registry-creds id=e44a51ba-9c48-4d8f-829b-7d4aa4b3ec68 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c77a2b7dc1f9803fd39bfdcdb5fe11b6c2b56206712ebccc937f71be81b6ed6f
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.289202771Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-wkbbx/POD" id=35d8b85a-474a-4172-a917-e190cca1983d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.289312119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.295737829Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-wkbbx Namespace:default ID:bf4070e86d6e6c2cae1b3065ec65ddb6503ea7979dc514cb88aab3a913bbd0be UID:cd65b974-6325-4413-b939-01f963298726 NetNS:/var/run/netns/22bdddec-b45f-4de3-8acb-5012d0cf9448 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00012ad10}] Aliases:map[]}"
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.295773244Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-wkbbx to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.30691653Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-wkbbx Namespace:default ID:bf4070e86d6e6c2cae1b3065ec65ddb6503ea7979dc514cb88aab3a913bbd0be UID:cd65b974-6325-4413-b939-01f963298726 NetNS:/var/run/netns/22bdddec-b45f-4de3-8acb-5012d0cf9448 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00012ad10}] Aliases:map[]}"
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.30705678Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-wkbbx for CNI network kindnet (type=ptp)"
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.30797975Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.308779977Z" level=info msg="Ran pod sandbox bf4070e86d6e6c2cae1b3065ec65ddb6503ea7979dc514cb88aab3a913bbd0be with infra container: default/hello-world-app-5d498dc89-wkbbx/POD" id=35d8b85a-474a-4172-a917-e190cca1983d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.310143422Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e981c5ce-83da-4ab2-8532-ca61c4a4ac7c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.310309997Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e981c5ce-83da-4ab2-8532-ca61c4a4ac7c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.310355147Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=e981c5ce-83da-4ab2-8532-ca61c4a4ac7c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.311095697Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=134d31a1-6327-49d1-866b-05eb22eab22b name=/runtime.v1.ImageService/PullImage
	Nov 01 10:00:06 addons-407417 crio[780]: time="2025-11-01T10:00:06.322003587Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
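	If the echo-server pull in the last entries needs closer inspection, the CRI-O journal can be read directly on the node (a sketch; assumes the default systemd unit name crio inside the minikube node):
	  minikube -p addons-407417 ssh -- sudo journalctl -u crio --since 09:58 --no-pager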
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	5ed5dd2937c65       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   c77a2b7dc1f98       registry-creds-764b6fb674-v2bwb             kube-system
	0f5d95753eb60       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago        Running             nginx                                    0                   0dbd933c1603a       nginx                                       default
	a8e80b4f1ffa7       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   d4986ec902ef1       busybox                                     default
	f08090ded8635       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago        Running             csi-snapshotter                          0                   4c6fe39cf6260       csi-hostpathplugin-znf7c                    kube-system
	4a12913519788       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago        Running             csi-provisioner                          0                   4c6fe39cf6260       csi-hostpathplugin-znf7c                    kube-system
	fa059e5944f6d       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago        Running             liveness-probe                           0                   4c6fe39cf6260       csi-hostpathplugin-znf7c                    kube-system
	fa38ee36042b1       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   4c6fe39cf6260       csi-hostpathplugin-znf7c                    kube-system
	7e4b0adebd6e4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   7c5232e995896       gcp-auth-78565c9fb4-xnctl                   gcp-auth
	01e2a427bdd0a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago        Running             gadget                                   0                   d36b2d107f3ec       gadget-swsl2                                gadget
	e2fa965dde20e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   4c6fe39cf6260       csi-hostpathplugin-znf7c                    kube-system
	b4c2b31c1d67c       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago        Running             controller                               0                   e5c96b9292f4f       ingress-nginx-controller-675c5ddd98-2fxqb   ingress-nginx
	26ca487996d46       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   a0542e1695748       registry-proxy-cz772                        kube-system
	209034ab12f22       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   07018c2dc826e       nvidia-device-plugin-daemonset-z5mvf        kube-system
	227a3dea494bb       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   116cc4c7c5337       yakd-dashboard-5ff678cb9-7p2rl              yakd-dashboard
	bddb1deaf2b50       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   4c6fe39cf6260       csi-hostpathplugin-znf7c                    kube-system
	febf4ba9fa488       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   720f494368ff8       amd-gpu-device-plugin-f46dd                 kube-system
	3901514c12896       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   0613b67e144b5       snapshot-controller-7d9fbc56b8-nmmp8        kube-system
	dea585cf0fda5       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   dfc6c327de889       snapshot-controller-7d9fbc56b8-dtxff        kube-system
	1a43d3e93f88a       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   9db5c62c6a0d8       kube-ingress-dns-minikube                   kube-system
	b0fa2acbd6707       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   c59aa627a516f       local-path-provisioner-648f6765c9-zxm6m     local-path-storage
	f1cbdd3dea0c8       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   6e2b63c0901c1       csi-hostpath-attacher-0                     kube-system
	e9ee42459c8cc       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   88de74db94407       csi-hostpath-resizer-0                      kube-system
	c1eb0ca70ea95       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             3 minutes ago        Exited              patch                                    1                   41751ecaf9569       ingress-nginx-admission-patch-ppfh2         ingress-nginx
	8027ccccaa983       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              create                                   0                   4748bf76000a3       ingress-nginx-admission-create-hqmb4        ingress-nginx
	84efcf67417dc       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago        Running             cloud-spanner-emulator                   0                   2e375288dcd20       cloud-spanner-emulator-86bd5cbb97-rvb7g     default
	c21e111d12956       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   84a6a789764c0       registry-6b586f9694-httq4                   kube-system
	ee71e1d3f20be       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   d245a22e7da01       metrics-server-85b7d694d7-tbn2d             kube-system
	1800341cdaf4e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   7d5dc12fac659       coredns-66bc5c9577-gp9gr                    kube-system
	c652bc696ccca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   e9b2ea2d0d732       storage-provisioner                         kube-system
	3e48e42054985       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   b09d806c02ee1       kindnet-662bf                               kube-system
	93000561a31e8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   c87fe66aa0e17       kube-proxy-f5sgj                            kube-system
	b02d1d64a55b7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   184c6f0792c8f       kube-apiserver-addons-407417                kube-system
	7f28a4faf3888       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   b87e1f7e6a0fc       kube-scheduler-addons-407417                kube-system
	6aaf19e53fbb2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   eb47bec278ac3       etcd-addons-407417                          kube-system
	4d0958fc37fb7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   603217c00fc86       kube-controller-manager-addons-407417       kube-system
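	The same container listing can be regenerated on the node with crictl (a sketch; crictl ships in the minikube node image and talks to the CRI-O socket by default):
	  minikube -p addons-407417 ssh -- sudo crictl ps -a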
	
	
	==> coredns [1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2] <==
	[INFO] 10.244.0.22:40012 - 54366 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006113419s
	[INFO] 10.244.0.22:52075 - 18071 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005821467s
	[INFO] 10.244.0.22:58890 - 31970 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005954755s
	[INFO] 10.244.0.22:33573 - 51820 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004648093s
	[INFO] 10.244.0.22:35075 - 19637 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007652379s
	[INFO] 10.244.0.22:51804 - 7827 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000794878s
	[INFO] 10.244.0.22:37747 - 7433 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001788876s
	[INFO] 10.244.0.27:51711 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00027923s
	[INFO] 10.244.0.27:35422 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000183845s
	[INFO] 10.244.0.31:44141 - 19129 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000217705s
	[INFO] 10.244.0.31:48728 - 22295 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000316281s
	[INFO] 10.244.0.31:38444 - 55559 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000149553s
	[INFO] 10.244.0.31:50562 - 25388 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000219916s
	[INFO] 10.244.0.31:43609 - 4148 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000132304s
	[INFO] 10.244.0.31:49572 - 46107 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000183293s
	[INFO] 10.244.0.31:37261 - 53994 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004440233s
	[INFO] 10.244.0.31:56169 - 8168 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.005317904s
	[INFO] 10.244.0.31:47642 - 13407 "AAAA IN accounts.google.com.europe-west4-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.006555501s
	[INFO] 10.244.0.31:55083 - 56425 "A IN accounts.google.com.europe-west4-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.007229444s
	[INFO] 10.244.0.31:52769 - 31636 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004461939s
	[INFO] 10.244.0.31:33655 - 24163 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.007483009s
	[INFO] 10.244.0.31:39942 - 65515 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005326658s
	[INFO] 10.244.0.31:45969 - 27034 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005544546s
	[INFO] 10.244.0.31:37592 - 5457 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001396539s
	[INFO] 10.244.0.31:39254 - 31586 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001449834s
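	To watch this resolver traffic while a test is still running, the CoreDNS logs can be streamed directly (the pod name is taken from the container listing above; the context name is assumed to match the profile):
	  kubectl --context addons-407417 -n kube-system logs coredns-66bc5c9577-gp9gr -f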
	
	
	==> describe nodes <==
	Name:               addons-407417
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-407417
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=addons-407417
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_55_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-407417
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-407417"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:55:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-407417
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:00:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:58:59 +0000   Sat, 01 Nov 2025 09:55:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:58:59 +0000   Sat, 01 Nov 2025 09:55:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:58:59 +0000   Sat, 01 Nov 2025 09:55:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:58:59 +0000   Sat, 01 Nov 2025 09:56:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-407417
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                93d9c905-5f59-4697-8bdc-5b43720cd9fb
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  default                     cloud-spanner-emulator-86bd5cbb97-rvb7g      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  default                     hello-world-app-5d498dc89-wkbbx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-swsl2                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  gcp-auth                    gcp-auth-78565c9fb4-xnctl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-2fxqb    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m36s
	  kube-system                 amd-gpu-device-plugin-f46dd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 coredns-66bc5c9577-gp9gr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m37s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 csi-hostpathplugin-znf7c                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-addons-407417                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m43s
	  kube-system                 kindnet-662bf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m37s
	  kube-system                 kube-apiserver-addons-407417                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-addons-407417        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-proxy-f5sgj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-addons-407417                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 metrics-server-85b7d694d7-tbn2d              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m36s
	  kube-system                 nvidia-device-plugin-daemonset-z5mvf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 registry-6b586f9694-httq4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 registry-creds-764b6fb674-v2bwb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 registry-proxy-cz772                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 snapshot-controller-7d9fbc56b8-dtxff         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 snapshot-controller-7d9fbc56b8-nmmp8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  local-path-storage          local-path-provisioner-648f6765c9-zxm6m      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-7p2rl               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m35s  kube-proxy       
	  Normal  Starting                 4m43s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m43s  kubelet          Node addons-407417 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s  kubelet          Node addons-407417 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s  kubelet          Node addons-407417 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m38s  node-controller  Node addons-407417 event: Registered Node addons-407417 in Controller
	  Normal  NodeReady                3m56s  kubelet          Node addons-407417 status is now: NodeReady
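	This node summary corresponds to kubectl describe output and can be regenerated at any point during the run (assuming the kubectl context minikube created for this profile):
	  kubectl --context addons-407417 describe node addons-407417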
	
	
	==> dmesg <==
	[  +0.077240] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020831] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.657102] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 1 09:57] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.028293] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023905] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023938] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023934] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +2.047845] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[Nov 1 09:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +8.191344] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[ +16.382718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[ +32.253574] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	
	
	==> etcd [6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda] <==
	{"level":"warn","ts":"2025-11-01T09:55:21.749448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.755782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.762332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.769079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.775124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.781359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.787205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.794169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.799883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.824990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.831002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.836885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:33.036513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:33.043060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:59.255060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:59.261408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:59.274046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:59.280228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:57:29.446981Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"159.99459ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041022195886575 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/test-pvc.1873d986176a7cea\" mod_revision:1286 > success:<request_put:<key:\"/registry/events/default/test-pvc.1873d986176a7cea\" value_size:818 lease:8128041022195886558 >> failure:<request_range:<key:\"/registry/events/default/test-pvc.1873d986176a7cea\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T09:57:29.447105Z","caller":"traceutil/trace.go:172","msg":"trace[1988407247] linearizableReadLoop","detail":"{readStateIndex:1330; appliedIndex:1329; }","duration":"158.563398ms","start":"2025-11-01T09:57:29.288525Z","end":"2025-11-01T09:57:29.447089Z","steps":["trace[1988407247] 'read index received'  (duration: 35.149µs)","trace[1988407247] 'applied index is now lower than readState.Index'  (duration: 158.527242ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:57:29.447140Z","caller":"traceutil/trace.go:172","msg":"trace[632461307] transaction","detail":"{read_only:false; response_revision:1289; number_of_response:1; }","duration":"182.705626ms","start":"2025-11-01T09:57:29.264413Z","end":"2025-11-01T09:57:29.447119Z","steps":["trace[632461307] 'process raft request'  (duration: 22.01965ms)","trace[632461307] 'compare'  (duration: 159.896616ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:57:29.447311Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.777261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386\" limit:1 ","response":"range_response_count:1 size:2886"}
	{"level":"info","ts":"2025-11-01T09:57:29.447350Z","caller":"traceutil/trace.go:172","msg":"trace[19721979] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386; range_end:; response_count:1; response_revision:1289; }","duration":"158.830264ms","start":"2025-11-01T09:57:29.288512Z","end":"2025-11-01T09:57:29.447342Z","steps":["trace[19721979] 'agreement among raft nodes before linearized reading'  (duration: 158.65309ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:57:29.447468Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.170946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattributesclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:57:29.447521Z","caller":"traceutil/trace.go:172","msg":"trace[1709287909] range","detail":"{range_begin:/registry/volumeattributesclasses; range_end:; response_count:0; response_revision:1289; }","duration":"128.203483ms","start":"2025-11-01T09:57:29.319284Z","end":"2025-11-01T09:57:29.447487Z","steps":["trace[1709287909] 'agreement among raft nodes before linearized reading'  (duration: 128.136599ms)"],"step_count":1}
	
	
	==> gcp-auth [7e4b0adebd6e4b4f76a5187c833c9926bbd8fe14a0b790415afb0904d52a6614] <==
	2025/11/01 09:56:55 GCP Auth Webhook started!
	2025/11/01 09:57:18 Ready to marshal response ...
	2025/11/01 09:57:18 Ready to write response ...
	2025/11/01 09:57:18 Ready to marshal response ...
	2025/11/01 09:57:18 Ready to write response ...
	2025/11/01 09:57:18 Ready to marshal response ...
	2025/11/01 09:57:18 Ready to write response ...
	2025/11/01 09:57:29 Ready to marshal response ...
	2025/11/01 09:57:29 Ready to write response ...
	2025/11/01 09:57:29 Ready to marshal response ...
	2025/11/01 09:57:29 Ready to write response ...
	2025/11/01 09:57:38 Ready to marshal response ...
	2025/11/01 09:57:38 Ready to write response ...
	2025/11/01 09:57:39 Ready to marshal response ...
	2025/11/01 09:57:39 Ready to write response ...
	2025/11/01 09:57:39 Ready to marshal response ...
	2025/11/01 09:57:39 Ready to write response ...
	2025/11/01 09:57:54 Ready to marshal response ...
	2025/11/01 09:57:54 Ready to write response ...
	2025/11/01 09:58:09 Ready to marshal response ...
	2025/11/01 09:58:09 Ready to write response ...
	2025/11/01 10:00:05 Ready to marshal response ...
	2025/11/01 10:00:05 Ready to write response ...
	
	
	==> kernel <==
	 10:00:07 up  1:42,  0 user,  load average: 0.27, 1.27, 1.84
	Linux addons-407417 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab] <==
	I1101 09:58:01.262807       1 main.go:301] handling current node
	I1101 09:58:11.262698       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:58:11.262733       1 main.go:301] handling current node
	I1101 09:58:21.263554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:58:21.263591       1 main.go:301] handling current node
	I1101 09:58:31.262433       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:58:31.262486       1 main.go:301] handling current node
	I1101 09:58:41.269842       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:58:41.269881       1 main.go:301] handling current node
	I1101 09:58:51.271003       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:58:51.271034       1 main.go:301] handling current node
	I1101 09:59:01.262645       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:59:01.262683       1 main.go:301] handling current node
	I1101 09:59:11.269616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:59:11.269648       1 main.go:301] handling current node
	I1101 09:59:21.271721       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:59:21.271758       1 main.go:301] handling current node
	I1101 09:59:31.262451       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:59:31.262478       1 main.go:301] handling current node
	I1101 09:59:41.262484       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:59:41.262564       1 main.go:301] handling current node
	I1101 09:59:51.264557       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:59:51.264596       1 main.go:301] handling current node
	I1101 10:00:01.271652       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:00:01.271684       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2] <==
	W1101 09:55:59.280186       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:56:11.586596       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.5.230:443: connect: connection refused
	W1101 09:56:11.586597       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.5.230:443: connect: connection refused
	E1101 09:56:11.586686       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.5.230:443: connect: connection refused" logger="UnhandledError"
	E1101 09:56:11.586708       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.5.230:443: connect: connection refused" logger="UnhandledError"
	W1101 09:56:11.604238       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.5.230:443: connect: connection refused
	E1101 09:56:11.604290       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.5.230:443: connect: connection refused" logger="UnhandledError"
	W1101 09:56:11.612021       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.5.230:443: connect: connection refused
	E1101 09:56:11.612054       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.5.230:443: connect: connection refused" logger="UnhandledError"
	W1101 09:56:14.864529       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 09:56:14.864605       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 09:56:14.864649       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.42.97:443: connect: connection refused" logger="UnhandledError"
	E1101 09:56:14.866542       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.42.97:443: connect: connection refused" logger="UnhandledError"
	E1101 09:56:14.872403       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.42.97:443: connect: connection refused" logger="UnhandledError"
	E1101 09:56:14.893729       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.42.97:443: connect: connection refused" logger="UnhandledError"
	I1101 09:56:14.973146       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 09:57:28.531215       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40350: use of closed network connection
	E1101 09:57:28.674920       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40372: use of closed network connection
	I1101 09:57:39.290586       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 09:57:39.485329       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.19.33"}
	I1101 09:58:05.159615       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1101 10:00:06.063407       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.9.39"}
	
	
	==> kube-controller-manager [4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9] <==
	I1101 09:55:29.236623       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:55:29.236629       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:55:29.236852       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:55:29.236900       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:55:29.236995       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:55:29.237108       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:55:29.238056       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:55:29.238074       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:55:29.238215       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:55:29.238231       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:55:29.239436       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:55:29.241749       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:55:29.243940       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:55:29.245025       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:55:29.250297       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:55:29.255541       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:55:29.257795       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 09:55:59.249166       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:55:59.249371       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 09:55:59.249435       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 09:55:59.265299       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 09:55:59.269021       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 09:55:59.350009       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:55:59.369388       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:56:14.173525       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265] <==
	I1101 09:55:30.861019       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:55:31.351457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:55:31.452438       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:55:31.452596       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:55:31.452767       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:55:31.531157       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:55:31.531266       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:55:31.562518       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:55:31.574702       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:55:31.577442       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:55:31.582016       1 config.go:200] "Starting service config controller"
	I1101 09:55:31.582041       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:55:31.582149       1 config.go:309] "Starting node config controller"
	I1101 09:55:31.582227       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:55:31.582269       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:55:31.582854       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:55:31.583014       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:55:31.582963       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:55:31.583109       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:55:31.682183       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:55:31.683525       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:55:31.683633       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45] <==
	E1101 09:55:22.263289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:55:22.263299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:55:22.263343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:55:22.263419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:55:22.263363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:55:22.263375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:55:22.263419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:55:22.263563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:55:22.263628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:55:22.263729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:55:22.263735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:55:22.263823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:55:23.071624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:55:23.123656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:55:23.272618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:55:23.277551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:55:23.290318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:55:23.323405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:55:23.345452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:55:23.347388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:55:23.389785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:55:23.416018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:55:23.479472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:55:23.570693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:55:26.260234       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.066052    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^453e0466-b709-11f0-9262-6e6d2ce7c6d0\") pod \"37cacfa5-414b-4f2b-bb12-d52ba9e401d0\" (UID: \"37cacfa5-414b-4f2b-bb12-d52ba9e401d0\") "
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.066156    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/37cacfa5-414b-4f2b-bb12-d52ba9e401d0-gcp-creds\") pod \"37cacfa5-414b-4f2b-bb12-d52ba9e401d0\" (UID: \"37cacfa5-414b-4f2b-bb12-d52ba9e401d0\") "
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.066192    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s84nk\" (UniqueName: \"kubernetes.io/projected/37cacfa5-414b-4f2b-bb12-d52ba9e401d0-kube-api-access-s84nk\") pod \"37cacfa5-414b-4f2b-bb12-d52ba9e401d0\" (UID: \"37cacfa5-414b-4f2b-bb12-d52ba9e401d0\") "
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.066258    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37cacfa5-414b-4f2b-bb12-d52ba9e401d0-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "37cacfa5-414b-4f2b-bb12-d52ba9e401d0" (UID: "37cacfa5-414b-4f2b-bb12-d52ba9e401d0"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.066373    1307 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/37cacfa5-414b-4f2b-bb12-d52ba9e401d0-gcp-creds\") on node \"addons-407417\" DevicePath \"\""
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.068560    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37cacfa5-414b-4f2b-bb12-d52ba9e401d0-kube-api-access-s84nk" (OuterVolumeSpecName: "kube-api-access-s84nk") pod "37cacfa5-414b-4f2b-bb12-d52ba9e401d0" (UID: "37cacfa5-414b-4f2b-bb12-d52ba9e401d0"). InnerVolumeSpecName "kube-api-access-s84nk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.069160    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^453e0466-b709-11f0-9262-6e6d2ce7c6d0" (OuterVolumeSpecName: "task-pv-storage") pod "37cacfa5-414b-4f2b-bb12-d52ba9e401d0" (UID: "37cacfa5-414b-4f2b-bb12-d52ba9e401d0"). InnerVolumeSpecName "pvc-f3b03a94-2953-46b4-b250-3f36197f977e". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.167174    1307 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s84nk\" (UniqueName: \"kubernetes.io/projected/37cacfa5-414b-4f2b-bb12-d52ba9e401d0-kube-api-access-s84nk\") on node \"addons-407417\" DevicePath \"\""
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.167236    1307 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-f3b03a94-2953-46b4-b250-3f36197f977e\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^453e0466-b709-11f0-9262-6e6d2ce7c6d0\") on node \"addons-407417\" "
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.173990    1307 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-f3b03a94-2953-46b4-b250-3f36197f977e" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^453e0466-b709-11f0-9262-6e6d2ce7c6d0") on node "addons-407417"
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.268599    1307 reconciler_common.go:299] "Volume detached for volume \"pvc-f3b03a94-2953-46b4-b250-3f36197f977e\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^453e0466-b709-11f0-9262-6e6d2ce7c6d0\") on node \"addons-407417\" DevicePath \"\""
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.386154    1307 scope.go:117] "RemoveContainer" containerID="1d8d5a934ecc1d3113e9b7401d38a10d4e84e6a192459f0985b0b82b1ce8b072"
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.395996    1307 scope.go:117] "RemoveContainer" containerID="1d8d5a934ecc1d3113e9b7401d38a10d4e84e6a192459f0985b0b82b1ce8b072"
	Nov 01 09:58:19 addons-407417 kubelet[1307]: E1101 09:58:19.396401    1307 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d8d5a934ecc1d3113e9b7401d38a10d4e84e6a192459f0985b0b82b1ce8b072\": container with ID starting with 1d8d5a934ecc1d3113e9b7401d38a10d4e84e6a192459f0985b0b82b1ce8b072 not found: ID does not exist" containerID="1d8d5a934ecc1d3113e9b7401d38a10d4e84e6a192459f0985b0b82b1ce8b072"
	Nov 01 09:58:19 addons-407417 kubelet[1307]: I1101 09:58:19.396443    1307 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d8d5a934ecc1d3113e9b7401d38a10d4e84e6a192459f0985b0b82b1ce8b072"} err="failed to get container status \"1d8d5a934ecc1d3113e9b7401d38a10d4e84e6a192459f0985b0b82b1ce8b072\": rpc error: code = NotFound desc = could not find container \"1d8d5a934ecc1d3113e9b7401d38a10d4e84e6a192459f0985b0b82b1ce8b072\": container with ID starting with 1d8d5a934ecc1d3113e9b7401d38a10d4e84e6a192459f0985b0b82b1ce8b072 not found: ID does not exist"
	Nov 01 09:58:20 addons-407417 kubelet[1307]: I1101 09:58:20.692039    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37cacfa5-414b-4f2b-bb12-d52ba9e401d0" path="/var/lib/kubelet/pods/37cacfa5-414b-4f2b-bb12-d52ba9e401d0/volumes"
	Nov 01 09:58:24 addons-407417 kubelet[1307]: I1101 09:58:24.709970    1307 scope.go:117] "RemoveContainer" containerID="c771b9cc542d255f676dcd87264bcf1e9821f21671b808e145124789a8a7886a"
	Nov 01 09:58:24 addons-407417 kubelet[1307]: I1101 09:58:24.717698    1307 scope.go:117] "RemoveContainer" containerID="d33457a100773676280f730519051f3218f9b24be987c8be29bb6c3f37046598"
	Nov 01 09:58:24 addons-407417 kubelet[1307]: I1101 09:58:24.726154    1307 scope.go:117] "RemoveContainer" containerID="d5acaf647d032884d0e3e32aa77f72bfbe704b43820d3f1a7b91b06ba3511345"
	Nov 01 09:58:30 addons-407417 kubelet[1307]: I1101 09:58:30.443075    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-v2bwb" podStartSLOduration=177.80761434 podStartE2EDuration="2m59.443052523s" podCreationTimestamp="2025-11-01 09:55:31 +0000 UTC" firstStartedPulling="2025-11-01 09:58:27.713543125 +0000 UTC m=+183.105913148" lastFinishedPulling="2025-11-01 09:58:29.348981309 +0000 UTC m=+184.741351331" observedRunningTime="2025-11-01 09:58:30.442338368 +0000 UTC m=+185.834708399" watchObservedRunningTime="2025-11-01 09:58:30.443052523 +0000 UTC m=+185.835422554"
	Nov 01 09:59:09 addons-407417 kubelet[1307]: I1101 09:59:09.689806    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-f46dd" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:59:13 addons-407417 kubelet[1307]: I1101 09:59:13.689476    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-z5mvf" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:59:42 addons-407417 kubelet[1307]: I1101 09:59:42.689920    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cz772" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 10:00:06 addons-407417 kubelet[1307]: I1101 10:00:06.135467    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qrgx\" (UniqueName: \"kubernetes.io/projected/cd65b974-6325-4413-b939-01f963298726-kube-api-access-8qrgx\") pod \"hello-world-app-5d498dc89-wkbbx\" (UID: \"cd65b974-6325-4413-b939-01f963298726\") " pod="default/hello-world-app-5d498dc89-wkbbx"
	Nov 01 10:00:06 addons-407417 kubelet[1307]: I1101 10:00:06.135670    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cd65b974-6325-4413-b939-01f963298726-gcp-creds\") pod \"hello-world-app-5d498dc89-wkbbx\" (UID: \"cd65b974-6325-4413-b939-01f963298726\") " pod="default/hello-world-app-5d498dc89-wkbbx"
	
	
	==> storage-provisioner [c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1] <==
	W1101 09:59:43.010637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:45.013509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:45.018036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:47.020410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:47.023768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:49.026520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:49.030077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:51.032806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:51.037230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:53.040629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:53.044342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:55.047119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:55.051664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:57.054221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:57.058613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:59.061350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:59.064905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:01.067957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:01.072981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:03.076396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:03.080768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:05.083711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:05.088888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:07.091454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:07.096790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-407417 -n addons-407417
helpers_test.go:269: (dbg) Run:  kubectl --context addons-407417 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-wkbbx ingress-nginx-admission-create-hqmb4 ingress-nginx-admission-patch-ppfh2
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-407417 describe pod hello-world-app-5d498dc89-wkbbx ingress-nginx-admission-create-hqmb4 ingress-nginx-admission-patch-ppfh2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-407417 describe pod hello-world-app-5d498dc89-wkbbx ingress-nginx-admission-create-hqmb4 ingress-nginx-admission-patch-ppfh2: exit status 1 (69.082636ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-wkbbx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-407417/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 10:00:05 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8qrgx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8qrgx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-wkbbx to addons-407417
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.454s (1.454s including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container: hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hqmb4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ppfh2" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-407417 describe pod hello-world-app-5d498dc89-wkbbx ingress-nginx-admission-create-hqmb4 ingress-nginx-admission-patch-ppfh2: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (253.798752ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 10:00:08.598556   77730 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:00:08.598837   77730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:00:08.598848   77730 out.go:374] Setting ErrFile to fd 2...
	I1101 10:00:08.598854   77730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:00:08.599090   77730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:00:08.599398   77730 mustload.go:66] Loading cluster: addons-407417
	I1101 10:00:08.599787   77730 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:00:08.599807   77730 addons.go:607] checking whether the cluster is paused
	I1101 10:00:08.599909   77730 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:00:08.599933   77730 host.go:66] Checking if "addons-407417" exists ...
	I1101 10:00:08.600354   77730 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 10:00:08.618324   77730 ssh_runner.go:195] Run: systemctl --version
	I1101 10:00:08.618394   77730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 10:00:08.637541   77730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 10:00:08.737587   77730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:00:08.737669   77730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:00:08.768238   77730 cri.go:89] found id: "5ed5dd2937c65328e07c15a270f45c1de256fc5ec7e72e7b9515183b574a7a6e"
	I1101 10:00:08.768267   77730 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 10:00:08.768271   77730 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 10:00:08.768274   77730 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 10:00:08.768277   77730 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 10:00:08.768281   77730 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 10:00:08.768283   77730 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 10:00:08.768286   77730 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 10:00:08.768288   77730 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 10:00:08.768298   77730 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 10:00:08.768301   77730 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 10:00:08.768304   77730 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 10:00:08.768306   77730 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 10:00:08.768309   77730 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 10:00:08.768312   77730 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 10:00:08.768318   77730 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 10:00:08.768320   77730 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 10:00:08.768324   77730 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 10:00:08.768326   77730 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 10:00:08.768328   77730 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 10:00:08.768331   77730 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 10:00:08.768333   77730 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 10:00:08.768335   77730 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 10:00:08.768337   77730 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 10:00:08.768340   77730 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 10:00:08.768342   77730 cri.go:89] found id: ""
	I1101 10:00:08.768393   77730 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:00:08.783242   77730 out.go:203] 
	W1101 10:00:08.784484   77730 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:00:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:00:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:00:08.784527   77730 out.go:285] * 
	* 
	W1101 10:00:08.788732   77730 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:00:08.790250   77730 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable ingress --alsologtostderr -v=1: exit status 11 (257.113823ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 10:00:08.857843   77790 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:00:08.858097   77790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:00:08.858107   77790 out.go:374] Setting ErrFile to fd 2...
	I1101 10:00:08.858111   77790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:00:08.858320   77790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:00:08.858615   77790 mustload.go:66] Loading cluster: addons-407417
	I1101 10:00:08.858970   77790 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:00:08.858984   77790 addons.go:607] checking whether the cluster is paused
	I1101 10:00:08.859062   77790 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:00:08.859077   77790 host.go:66] Checking if "addons-407417" exists ...
	I1101 10:00:08.859464   77790 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 10:00:08.876782   77790 ssh_runner.go:195] Run: systemctl --version
	I1101 10:00:08.876839   77790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 10:00:08.893894   77790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 10:00:08.994682   77790 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:00:08.994772   77790 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:00:09.025791   77790 cri.go:89] found id: "5ed5dd2937c65328e07c15a270f45c1de256fc5ec7e72e7b9515183b574a7a6e"
	I1101 10:00:09.025815   77790 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 10:00:09.025819   77790 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 10:00:09.025829   77790 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 10:00:09.025832   77790 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 10:00:09.025836   77790 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 10:00:09.025838   77790 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 10:00:09.025842   77790 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 10:00:09.025847   77790 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 10:00:09.025855   77790 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 10:00:09.025859   77790 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 10:00:09.025864   77790 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 10:00:09.025869   77790 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 10:00:09.025877   77790 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 10:00:09.025881   77790 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 10:00:09.025892   77790 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 10:00:09.025897   77790 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 10:00:09.025901   77790 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 10:00:09.025903   77790 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 10:00:09.025906   77790 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 10:00:09.025908   77790 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 10:00:09.025910   77790 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 10:00:09.025912   77790 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 10:00:09.025920   77790 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 10:00:09.025922   77790 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 10:00:09.025925   77790 cri.go:89] found id: ""
	I1101 10:00:09.025988   77790 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:00:09.041945   77790 out.go:203] 
	W1101 10:00:09.043608   77790 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:00:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:00:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:00:09.043632   77790 out.go:285] * 
	* 
	W1101 10:00:09.047670   77790 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:00:09.048932   77790 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (150.02s)

TestAddons/parallel/InspektorGadget (5.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-swsl2" [9d83b470-69f3-4e61-bb47-673876b3da4e] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003145752s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (243.505061ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 09:57:44.598263   74194 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:44.598528   74194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:44.598539   74194 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:44.598545   74194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:44.598760   74194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:57:44.599041   74194 mustload.go:66] Loading cluster: addons-407417
	I1101 09:57:44.599402   74194 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:44.599420   74194 addons.go:607] checking whether the cluster is paused
	I1101 09:57:44.599543   74194 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:44.599567   74194 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:57:44.599955   74194 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:57:44.616565   74194 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:44.616606   74194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:57:44.632806   74194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:57:44.733290   74194 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:57:44.733385   74194 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:57:44.761835   74194 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:57:44.761859   74194 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:57:44.761863   74194 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:57:44.761866   74194 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:57:44.761868   74194 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:57:44.761871   74194 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:57:44.761873   74194 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:57:44.761876   74194 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:57:44.761878   74194 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:57:44.761883   74194 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:57:44.761886   74194 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:57:44.761888   74194 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:57:44.761890   74194 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:57:44.761893   74194 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:57:44.761895   74194 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:57:44.761898   74194 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:57:44.761901   74194 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:57:44.761904   74194 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:57:44.761906   74194 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:57:44.761909   74194 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:57:44.761911   74194 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:57:44.761913   74194 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:57:44.761916   74194 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:57:44.761918   74194 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:57:44.761920   74194 cri.go:89] found id: ""
	I1101 09:57:44.761957   74194 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:57:44.776260   74194 out.go:203] 
	W1101 09:57:44.777545   74194 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:57:44.777570   74194 out.go:285] * 
	* 
	W1101 09:57:44.781470   74194 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:57:44.782823   74194 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)

TestAddons/parallel/MetricsServer (5.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.239334ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-tbn2d" [829a0a39-ab34-4ab2-97ab-4cc6d1ec1844] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.026268684s
addons_test.go:463: (dbg) Run:  kubectl --context addons-407417 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (274.503267ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 09:57:39.323854   73056 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:39.324380   73056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:39.324399   73056 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:39.324406   73056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:39.324787   73056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:57:39.325169   73056 mustload.go:66] Loading cluster: addons-407417
	I1101 09:57:39.325691   73056 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:39.325718   73056 addons.go:607] checking whether the cluster is paused
	I1101 09:57:39.325856   73056 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:39.325883   73056 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:57:39.326483   73056 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:57:39.345370   73056 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:39.345427   73056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:57:39.365016   73056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:57:39.475878   73056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:57:39.475975   73056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:57:39.513182   73056 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:57:39.513202   73056 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:57:39.513206   73056 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:57:39.513209   73056 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:57:39.513212   73056 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:57:39.513216   73056 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:57:39.513218   73056 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:57:39.513221   73056 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:57:39.513223   73056 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:57:39.513232   73056 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:57:39.513235   73056 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:57:39.513237   73056 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:57:39.513240   73056 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:57:39.513242   73056 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:57:39.513244   73056 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:57:39.513251   73056 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:57:39.513258   73056 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:57:39.513262   73056 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:57:39.513264   73056 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:57:39.513266   73056 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:57:39.513268   73056 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:57:39.513271   73056 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:57:39.513273   73056 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:57:39.513276   73056 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:57:39.513278   73056 cri.go:89] found id: ""
	I1101 09:57:39.513336   73056 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:57:39.527515   73056 out.go:203] 
	W1101 09:57:39.528692   73056 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:57:39.528715   73056 out.go:285] * 
	* 
	W1101 09:57:39.533059   73056 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:57:39.534240   73056 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.37s)
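Note: the failure above is the pattern every addons enable/disable hits in this run: minikube SSHes into the node, lists the kube-system containers with crictl, then runs "sudo runc list -f json" to confirm nothing is paused, and that command exits 1 because /run/runc does not exist on this crio node, so the whole operation aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED for enables). The Go sketch below only illustrates that probe and one way it could tolerate the missing state directory; it is not minikube's implementation, and reading an absent /run/runc as "no paused containers" is an assumption, not established behaviour.

// paused_check_sketch.go: a minimal sketch (not minikube's code) of the
// "is anything paused?" probe that fails above.
// Assumption: a missing /run/runc state directory means runc has never created
// any container state on this node, so it is treated like an empty list.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "running", "paused"
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// The exact failure seen in this report: the runc state dir is absent.
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	fmt.Println(ids, err)
}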

                                                
                                    
TestAddons/parallel/CSI (43.4s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1101 09:57:36.813961   61522 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 09:57:36.817155   61522 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 09:57:36.817182   61522 kapi.go:107] duration metric: took 3.247808ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.261586ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-407417 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-407417 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [02ab5ebb-e0e0-49a7-b5ef-173821062add] Pending
helpers_test.go:352: "task-pv-pod" [02ab5ebb-e0e0-49a7-b5ef-173821062add] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [02ab5ebb-e0e0-49a7-b5ef-173821062add] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004109078s
addons_test.go:572: (dbg) Run:  kubectl --context addons-407417 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-407417 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-407417 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-407417 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-407417 delete pod task-pv-pod: (1.1268741s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-407417 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-407417 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-407417 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [37cacfa5-414b-4f2b-bb12-d52ba9e401d0] Pending
helpers_test.go:352: "task-pv-pod-restore" [37cacfa5-414b-4f2b-bb12-d52ba9e401d0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [37cacfa5-414b-4f2b-bb12-d52ba9e401d0] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004222465s
addons_test.go:614: (dbg) Run:  kubectl --context addons-407417 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-407417 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-407417 delete volumesnapshot new-snapshot-demo
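Note: the repeated helpers_test.go:402 lines above are a jsonpath polling loop: kubectl re-reads .status.phase on the hpvc claim until the csi-hostpath driver binds it, after which the pod, snapshot and restore steps proceed. A minimal client-go version of the same wait is sketched below; it assumes the current kubeconfig context already points at the addons-407417 cluster and reuses the names from the test (PVC "hpvc" in namespace "default"). It is an illustration, not the test's actual helper.

// waitpvc_sketch.go: a client-go version of the kubectl jsonpath polling done by
// helpers_test.go above (hypothetical standalone helper, not the test's code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPVCBound polls until the named claim reports phase Bound, mirroring the
// 6m0s wait the test announces above.
func waitPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors, like the kubectl loop does
			}
			return pvc.Status.Phase == corev1.ClaimBound, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Names taken from the test above: PVC "hpvc" in namespace "default".
	fmt.Println(waitPVCBound(context.Background(), cs, "default", "hpvc"))
}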
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (241.16785ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:58:19.777548   75447 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:58:19.777810   75447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:19.777820   75447 out.go:374] Setting ErrFile to fd 2...
	I1101 09:58:19.777824   75447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:19.778034   75447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:58:19.778299   75447 mustload.go:66] Loading cluster: addons-407417
	I1101 09:58:19.778623   75447 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:19.778638   75447 addons.go:607] checking whether the cluster is paused
	I1101 09:58:19.778719   75447 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:19.778733   75447 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:58:19.779103   75447 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:58:19.796437   75447 ssh_runner.go:195] Run: systemctl --version
	I1101 09:58:19.796485   75447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:58:19.812946   75447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:58:19.911007   75447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:58:19.911096   75447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:58:19.940522   75447 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:58:19.940547   75447 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:58:19.940553   75447 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:58:19.940558   75447 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:58:19.940562   75447 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:58:19.940567   75447 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:58:19.940572   75447 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:58:19.940576   75447 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:58:19.940580   75447 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:58:19.940603   75447 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:58:19.940613   75447 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:58:19.940617   75447 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:58:19.940621   75447 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:58:19.940625   75447 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:58:19.940629   75447 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:58:19.940639   75447 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:58:19.940643   75447 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:58:19.940649   75447 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:58:19.940653   75447 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:58:19.940657   75447 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:58:19.940661   75447 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:58:19.940665   75447 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:58:19.940668   75447 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:58:19.940670   75447 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:58:19.940672   75447 cri.go:89] found id: ""
	I1101 09:58:19.940714   75447 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:58:19.954453   75447 out.go:203] 
	W1101 09:58:19.955624   75447 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:58:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:58:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:58:19.955642   75447 out.go:285] * 
	* 
	W1101 09:58:19.959730   75447 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:58:19.961019   75447 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (243.301961ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:58:20.020828   75507 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:58:20.020960   75507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:20.020970   75507 out.go:374] Setting ErrFile to fd 2...
	I1101 09:58:20.020974   75507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:58:20.021198   75507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:58:20.021483   75507 mustload.go:66] Loading cluster: addons-407417
	I1101 09:58:20.021900   75507 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:20.021919   75507 addons.go:607] checking whether the cluster is paused
	I1101 09:58:20.022020   75507 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:58:20.022044   75507 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:58:20.022461   75507 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:58:20.039063   75507 ssh_runner.go:195] Run: systemctl --version
	I1101 09:58:20.039120   75507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:58:20.055366   75507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:58:20.153082   75507 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:58:20.153196   75507 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:58:20.182489   75507 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:58:20.182523   75507 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:58:20.182528   75507 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:58:20.182533   75507 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:58:20.182537   75507 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:58:20.182542   75507 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:58:20.182546   75507 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:58:20.182550   75507 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:58:20.182555   75507 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:58:20.182567   75507 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:58:20.182572   75507 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:58:20.182574   75507 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:58:20.182576   75507 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:58:20.182579   75507 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:58:20.182581   75507 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:58:20.182592   75507 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:58:20.182598   75507 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:58:20.182601   75507 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:58:20.182603   75507 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:58:20.182606   75507 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:58:20.182610   75507 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:58:20.182613   75507 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:58:20.182615   75507 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:58:20.182617   75507 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:58:20.182619   75507 cri.go:89] found id: ""
	I1101 09:58:20.182656   75507 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:58:20.198430   75507 out.go:203] 
	W1101 09:58:20.199640   75507 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:58:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:58:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:58:20.199658   75507 out.go:285] * 
	* 
	W1101 09:58:20.203567   75507 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:58:20.204672   75507 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (43.40s)

                                                
                                    
TestAddons/parallel/Headlamp (2.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-407417 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-407417 --alsologtostderr -v=1: exit status 11 (259.959041ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:57:28.978357   71548 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:28.978628   71548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:28.978638   71548 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:28.978642   71548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:28.978925   71548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:57:28.979279   71548 mustload.go:66] Loading cluster: addons-407417
	I1101 09:57:28.979831   71548 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:28.979849   71548 addons.go:607] checking whether the cluster is paused
	I1101 09:57:28.979935   71548 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:28.979951   71548 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:57:28.980296   71548 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:57:28.998370   71548 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:28.998410   71548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:57:29.017965   71548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:57:29.117119   71548 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:57:29.117195   71548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:57:29.146839   71548 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:57:29.146857   71548 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:57:29.146860   71548 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:57:29.146863   71548 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:57:29.146866   71548 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:57:29.146868   71548 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:57:29.146871   71548 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:57:29.146873   71548 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:57:29.146876   71548 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:57:29.146880   71548 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:57:29.146883   71548 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:57:29.146886   71548 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:57:29.146889   71548 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:57:29.146891   71548 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:57:29.146893   71548 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:57:29.146899   71548 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:57:29.146902   71548 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:57:29.146906   71548 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:57:29.146908   71548 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:57:29.146910   71548 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:57:29.146913   71548 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:57:29.146915   71548 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:57:29.146917   71548 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:57:29.146920   71548 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:57:29.146923   71548 cri.go:89] found id: ""
	I1101 09:57:29.146958   71548 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:57:29.162598   71548 out.go:203] 
	W1101 09:57:29.164015   71548 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:57:29.164044   71548 out.go:285] * 
	* 
	W1101 09:57:29.170544   71548 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:57:29.171954   71548 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-407417 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-407417
helpers_test.go:243: (dbg) docker inspect addons-407417:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d",
	        "Created": "2025-11-01T09:55:07.745378868Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 63564,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:55:07.778921511Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d/hosts",
	        "LogPath": "/var/lib/docker/containers/ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d/ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d-json.log",
	        "Name": "/addons-407417",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-407417:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-407417",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed24700313829089657b30ec6f486e8b83c999069fe891a9ab2fd8c3d42b808d",
	                "LowerDir": "/var/lib/docker/overlay2/291d0f4817314d287a487f35ac3897afc0ecc7fe87b00b9144bd88abe3c60b06-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/291d0f4817314d287a487f35ac3897afc0ecc7fe87b00b9144bd88abe3c60b06/merged",
	                "UpperDir": "/var/lib/docker/overlay2/291d0f4817314d287a487f35ac3897afc0ecc7fe87b00b9144bd88abe3c60b06/diff",
	                "WorkDir": "/var/lib/docker/overlay2/291d0f4817314d287a487f35ac3897afc0ecc7fe87b00b9144bd88abe3c60b06/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-407417",
	                "Source": "/var/lib/docker/volumes/addons-407417/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-407417",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-407417",
	                "name.minikube.sigs.k8s.io": "addons-407417",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "85c889e1490ed7288b2eebae0ef1c6b5e3585f156ba757532f98e9a94ab85cdb",
	            "SandboxKey": "/var/run/docker/netns/85c889e1490e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-407417": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:c6:88:98:aa:a1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1118a8e501685a515edaa4b953c5701ac59aa6d5e4c88c554f16e9c1e729e89a",
	                    "EndpointID": "3f13917744a420456b082e7060e56442c956eb32e5d032775bda172d5bb372e7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-407417",
	                        "ed2470031382"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
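Note: the NetworkSettings.Ports block in the inspect output above is what the earlier cli_runner lines query: minikube templates docker container inspect for the 22/tcp mapping and then opens its SSH client against 127.0.0.1:32768 with the profile's id_rsa. The small Go sketch below reproduces only that port lookup (a hypothetical standalone helper using the profile name from this run); it is not minikube's code.

// hostport_sketch.go: reads the forwarded host port for the node's SSH server the
// same way the cli_runner lines above do, by templating docker container inspect.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort shells out to docker with the same Go template shown in the logs,
// e.g. {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}.
func hostPort(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("addons-407417", "22/tcp")
	if err != nil {
		panic(err)
	}
	// With the mapping shown in the inspect output above this prints 127.0.0.1:32768,
	// the address the sshutil lines connect to.
	fmt.Println("127.0.0.1:" + port)
}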
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-407417 -n addons-407417
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-407417 logs -n 25: (1.182377489s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-606362 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-606362   │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ delete  │ -p download-only-606362                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-606362   │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ start   │ -o=json --download-only -p download-only-712511 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-712511   │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ delete  │ -p download-only-712511                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-712511   │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ delete  │ -p download-only-606362                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-606362   │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ delete  │ -p download-only-712511                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-712511   │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ start   │ --download-only -p download-docker-647034 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-647034 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │                     │
	│ delete  │ -p download-docker-647034                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-647034 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ start   │ --download-only -p binary-mirror-031328 --alsologtostderr --binary-mirror http://127.0.0.1:41345 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-031328   │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │                     │
	│ delete  │ -p binary-mirror-031328                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-031328   │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ addons  │ enable dashboard -p addons-407417                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-407417          │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │                     │
	│ addons  │ disable dashboard -p addons-407417                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-407417          │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │                     │
	│ start   │ -p addons-407417 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-407417          │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:57 UTC │
	│ addons  │ addons-407417 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-407417          │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ addons-407417 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-407417          │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	│ addons  │ enable headlamp -p addons-407417 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-407417          │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:54:47
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:54:47.664744   62924 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:54:47.664990   62924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:54:47.664999   62924 out.go:374] Setting ErrFile to fd 2...
	I1101 09:54:47.665003   62924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:54:47.665213   62924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:54:47.665733   62924 out.go:368] Setting JSON to false
	I1101 09:54:47.666517   62924 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5828,"bootTime":1761985060,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:54:47.666599   62924 start.go:143] virtualization: kvm guest
	I1101 09:54:47.668301   62924 out.go:179] * [addons-407417] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:54:47.669336   62924 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 09:54:47.669355   62924 notify.go:221] Checking for updates...
	I1101 09:54:47.671457   62924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:54:47.672525   62924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 09:54:47.673590   62924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 09:54:47.674609   62924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:54:47.675656   62924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:54:47.676874   62924 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:54:47.697989   62924 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:54:47.698135   62924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:54:47.754235   62924 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-01 09:54:47.745337495 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:54:47.754344   62924 docker.go:319] overlay module found
	I1101 09:54:47.755751   62924 out.go:179] * Using the docker driver based on user configuration
	I1101 09:54:47.756712   62924 start.go:309] selected driver: docker
	I1101 09:54:47.756728   62924 start.go:930] validating driver "docker" against <nil>
	I1101 09:54:47.756739   62924 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:54:47.757256   62924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:54:47.816269   62924 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-11-01 09:54:47.806123113 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:54:47.816410   62924 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:54:47.816639   62924 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:54:47.818154   62924 out.go:179] * Using Docker driver with root privileges
	I1101 09:54:47.819109   62924 cni.go:84] Creating CNI manager for ""
	I1101 09:54:47.819174   62924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:54:47.819186   62924 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:54:47.819247   62924 start.go:353] cluster config:
	{Name:addons-407417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-407417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1101 09:54:47.820367   62924 out.go:179] * Starting "addons-407417" primary control-plane node in "addons-407417" cluster
	I1101 09:54:47.821370   62924 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:54:47.822316   62924 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:54:47.823280   62924 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:54:47.823308   62924 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:54:47.823312   62924 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:54:47.823406   62924 cache.go:59] Caching tarball of preloaded images
	I1101 09:54:47.823517   62924 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:54:47.823530   62924 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:54:47.823847   62924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/config.json ...
	I1101 09:54:47.823869   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/config.json: {Name:mk11a6cb83771ab7cf7d8557dde1fee66bcc7743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:54:47.838821   62924 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:54:47.838922   62924 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:54:47.838938   62924 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 09:54:47.838942   62924 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 09:54:47.838952   62924 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 09:54:47.838959   62924 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 09:55:00.692543   62924 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 09:55:00.692595   62924 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:55:00.692644   62924 start.go:360] acquireMachinesLock for addons-407417: {Name:mk47dbd797c97fe05e1b91d4d97e970ae666c44c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:55:00.692747   62924 start.go:364] duration metric: took 80.733µs to acquireMachinesLock for "addons-407417"
	I1101 09:55:00.692771   62924 start.go:93] Provisioning new machine with config: &{Name:addons-407417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-407417 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:55:00.692849   62924 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:55:00.694609   62924 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 09:55:00.694848   62924 start.go:159] libmachine.API.Create for "addons-407417" (driver="docker")
	I1101 09:55:00.694881   62924 client.go:173] LocalClient.Create starting
	I1101 09:55:00.695000   62924 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem
	I1101 09:55:00.782654   62924 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem
	I1101 09:55:00.873546   62924 cli_runner.go:164] Run: docker network inspect addons-407417 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:55:00.889840   62924 cli_runner.go:211] docker network inspect addons-407417 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:55:00.889921   62924 network_create.go:284] running [docker network inspect addons-407417] to gather additional debugging logs...
	I1101 09:55:00.889943   62924 cli_runner.go:164] Run: docker network inspect addons-407417
	W1101 09:55:00.906087   62924 cli_runner.go:211] docker network inspect addons-407417 returned with exit code 1
	I1101 09:55:00.906116   62924 network_create.go:287] error running [docker network inspect addons-407417]: docker network inspect addons-407417: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-407417 not found
	I1101 09:55:00.906142   62924 network_create.go:289] output of [docker network inspect addons-407417]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-407417 not found
	
	** /stderr **
	I1101 09:55:00.906266   62924 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:55:00.922920   62924 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86370}
	I1101 09:55:00.922960   62924 network_create.go:124] attempt to create docker network addons-407417 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 09:55:00.923006   62924 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-407417 addons-407417
	I1101 09:55:00.980988   62924 network_create.go:108] docker network addons-407417 192.168.49.0/24 created
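	For reference, a minimal Go sketch (not minikube's own code) that replays the docker network create invocation logged above via os/exec; the flags, labels, and the addons-407417 profile name are copied verbatim from the log line, and the sketch assumes a local Docker CLI on PATH.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Replays the network-create step shown in the log above. The -o options are
	// bridge-driver options; subnet/gateway match the free subnet picked by minikube.
	func main() {
		args := []string{
			"network", "create",
			"--driver=bridge",
			"--subnet=192.168.49.0/24",
			"--gateway=192.168.49.1",
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=addons-407417",
			"addons-407417",
		}
		out, err := exec.Command("docker", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("docker network create failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("created network: %s", out)
	}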
	I1101 09:55:00.981022   62924 kic.go:121] calculated static IP "192.168.49.2" for the "addons-407417" container
	I1101 09:55:00.981078   62924 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:55:00.995618   62924 cli_runner.go:164] Run: docker volume create addons-407417 --label name.minikube.sigs.k8s.io=addons-407417 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:55:01.013315   62924 oci.go:103] Successfully created a docker volume addons-407417
	I1101 09:55:01.013403   62924 cli_runner.go:164] Run: docker run --rm --name addons-407417-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-407417 --entrypoint /usr/bin/test -v addons-407417:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:55:03.403008   62924 cli_runner.go:217] Completed: docker run --rm --name addons-407417-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-407417 --entrypoint /usr/bin/test -v addons-407417:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.38956461s)
	I1101 09:55:03.403038   62924 oci.go:107] Successfully prepared a docker volume addons-407417
	I1101 09:55:03.403078   62924 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:55:03.403103   62924 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:55:03.403162   62924 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-407417:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 09:55:07.671369   62924 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-407417:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.26814786s)
	I1101 09:55:07.671402   62924 kic.go:203] duration metric: took 4.268295299s to extract preloaded images to volume ...
	W1101 09:55:07.671522   62924 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:55:07.671568   62924 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:55:07.671609   62924 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:55:07.729582   62924 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-407417 --name addons-407417 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-407417 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-407417 --network addons-407417 --ip 192.168.49.2 --volume addons-407417:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:55:08.014648   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Running}}
	I1101 09:55:08.032880   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:08.050839   62924 cli_runner.go:164] Run: docker exec addons-407417 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:55:08.099536   62924 oci.go:144] the created container "addons-407417" has a running status.
	I1101 09:55:08.099575   62924 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa...
	I1101 09:55:08.144791   62924 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:55:08.171924   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:08.189746   62924 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:55:08.189770   62924 kic_runner.go:114] Args: [docker exec --privileged addons-407417 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:55:08.228169   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:08.249664   62924 machine.go:94] provisionDockerMachine start ...
	I1101 09:55:08.249798   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:08.271342   62924 main.go:143] libmachine: Using SSH client type: native
	I1101 09:55:08.271607   62924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 09:55:08.271622   62924 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:55:08.272258   62924 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46100->127.0.0.1:32768: read: connection reset by peer
	I1101 09:55:11.414783   62924 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-407417
	
	I1101 09:55:11.414820   62924 ubuntu.go:182] provisioning hostname "addons-407417"
	I1101 09:55:11.414915   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:11.432301   62924 main.go:143] libmachine: Using SSH client type: native
	I1101 09:55:11.432622   62924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 09:55:11.432642   62924 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-407417 && echo "addons-407417" | sudo tee /etc/hostname
	I1101 09:55:11.581758   62924 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-407417
	
	I1101 09:55:11.581836   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:11.600536   62924 main.go:143] libmachine: Using SSH client type: native
	I1101 09:55:11.600749   62924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 09:55:11.600765   62924 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-407417' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-407417/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-407417' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:55:11.742580   62924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:55:11.742614   62924 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 09:55:11.742666   62924 ubuntu.go:190] setting up certificates
	I1101 09:55:11.742683   62924 provision.go:84] configureAuth start
	I1101 09:55:11.742745   62924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-407417
	I1101 09:55:11.759741   62924 provision.go:143] copyHostCerts
	I1101 09:55:11.759820   62924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 09:55:11.759939   62924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 09:55:11.760060   62924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 09:55:11.760128   62924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.addons-407417 san=[127.0.0.1 192.168.49.2 addons-407417 localhost minikube]
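	As an illustration of what the SAN list in the line above amounts to, here is a self-contained Go sketch that issues a self-signed server certificate with the same DNS names and IP addresses; it is not the code path minikube uses (minikube signs against its own CA), and the 26280h lifetime is an assumption borrowed from the CertExpiration field in the cluster config earlier in this log.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	// Illustrative only: a self-signed cert carrying the SANs logged above
	// (127.0.0.1, 192.168.49.2, addons-407417, localhost, minikube).
	func main() {
		key := must(rsa.GenerateKey(rand.Reader, 2048))
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-407417"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-407417", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der := must(x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key))
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}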
	I1101 09:55:11.808391   62924 provision.go:177] copyRemoteCerts
	I1101 09:55:11.808457   62924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:55:11.808506   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:11.827287   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:11.928317   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:55:11.948338   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:55:11.966161   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:55:11.983903   62924 provision.go:87] duration metric: took 241.20484ms to configureAuth
	I1101 09:55:11.983936   62924 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:55:11.984121   62924 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:55:11.984224   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:12.000880   62924 main.go:143] libmachine: Using SSH client type: native
	I1101 09:55:12.001136   62924 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 09:55:12.001160   62924 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:55:12.254702   62924 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:55:12.254741   62924 machine.go:97] duration metric: took 4.005048324s to provisionDockerMachine
	I1101 09:55:12.254756   62924 client.go:176] duration metric: took 11.559866083s to LocalClient.Create
	I1101 09:55:12.254783   62924 start.go:167] duration metric: took 11.5599355s to libmachine.API.Create "addons-407417"
	I1101 09:55:12.254793   62924 start.go:293] postStartSetup for "addons-407417" (driver="docker")
	I1101 09:55:12.254806   62924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:55:12.254901   62924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:55:12.254957   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:12.271962   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:12.373654   62924 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:55:12.377056   62924 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:55:12.377085   62924 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:55:12.377099   62924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 09:55:12.377166   62924 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 09:55:12.377200   62924 start.go:296] duration metric: took 122.400126ms for postStartSetup
	I1101 09:55:12.377535   62924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-407417
	I1101 09:55:12.393732   62924 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/config.json ...
	I1101 09:55:12.394011   62924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:55:12.394068   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:12.410056   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:12.506765   62924 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:55:12.511141   62924 start.go:128] duration metric: took 11.81827633s to createHost
	I1101 09:55:12.511165   62924 start.go:83] releasing machines lock for "addons-407417", held for 11.818404827s
	I1101 09:55:12.511243   62924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-407417
	I1101 09:55:12.527840   62924 ssh_runner.go:195] Run: cat /version.json
	I1101 09:55:12.527875   62924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:55:12.527897   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:12.527958   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:12.545740   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:12.546238   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:12.696130   62924 ssh_runner.go:195] Run: systemctl --version
	I1101 09:55:12.702538   62924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:55:12.737439   62924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:55:12.742051   62924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:55:12.742123   62924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:55:12.766912   62924 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:55:12.766947   62924 start.go:496] detecting cgroup driver to use...
	I1101 09:55:12.766999   62924 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:55:12.767045   62924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:55:12.782903   62924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:55:12.795098   62924 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:55:12.795164   62924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:55:12.811455   62924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:55:12.829116   62924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:55:12.907553   62924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:55:12.991074   62924 docker.go:234] disabling docker service ...
	I1101 09:55:12.991134   62924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:55:13.009695   62924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:55:13.022320   62924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:55:13.106107   62924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:55:13.182043   62924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:55:13.194011   62924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:55:13.207392   62924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:55:13.207445   62924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.217335   62924 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:55:13.217401   62924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.225826   62924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.234095   62924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.242290   62924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:55:13.249934   62924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.258277   62924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.271670   62924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:55:13.280273   62924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:55:13.287481   62924 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 09:55:13.287554   62924 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 09:55:13.299428   62924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:55:13.306908   62924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:55:13.385935   62924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:55:13.489559   62924 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:55:13.489632   62924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:55:13.493507   62924 start.go:564] Will wait 60s for crictl version
	I1101 09:55:13.493559   62924 ssh_runner.go:195] Run: which crictl
	I1101 09:55:13.497104   62924 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:55:13.521265   62924 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:55:13.521351   62924 ssh_runner.go:195] Run: crio --version
	I1101 09:55:13.548096   62924 ssh_runner.go:195] Run: crio --version
	I1101 09:55:13.576326   62924 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:55:13.577365   62924 cli_runner.go:164] Run: docker network inspect addons-407417 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:55:13.595055   62924 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 09:55:13.599188   62924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:55:13.609196   62924 kubeadm.go:884] updating cluster {Name:addons-407417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-407417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:55:13.609347   62924 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:55:13.609419   62924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:55:13.641215   62924 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:55:13.641236   62924 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:55:13.641295   62924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:55:13.666899   62924 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:55:13.666923   62924 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:55:13.666933   62924 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 09:55:13.667052   62924 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-407417 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-407417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:55:13.667116   62924 ssh_runner.go:195] Run: crio config
	I1101 09:55:13.711542   62924 cni.go:84] Creating CNI manager for ""
	I1101 09:55:13.711570   62924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:55:13.711587   62924 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:55:13.711614   62924 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-407417 NodeName:addons-407417 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:55:13.711729   62924 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-407417"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:55:13.711786   62924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:55:13.719753   62924 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:55:13.719820   62924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:55:13.727570   62924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 09:55:13.739768   62924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:55:13.754478   62924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
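	If one wanted to sanity-check the kubeadm config just copied to /var/tmp/minikube/kubeadm.yaml.new, a hedged option is a kubeadm dry-run against that file. The sketch below shells out from Go and assumes the staged kubeadm binary sits alongside the kubelet under /var/lib/minikube/binaries/v1.34.1 (only the kubelet path is confirmed by this log); it would be run inside the node, e.g. over minikube ssh.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Dry-runs the generated kubeadm config; --dry-run makes kubeadm render and
	// validate manifests without touching the node state.
	func main() {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.34.1/kubeadm", // assumed staging path
			"init", "--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Printf("dry-run failed: %v\n", err)
		}
	}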
	I1101 09:55:13.766996   62924 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:55:13.771080   62924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:55:13.781068   62924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:55:13.856030   62924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:55:13.880005   62924 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417 for IP: 192.168.49.2
	I1101 09:55:13.880032   62924 certs.go:195] generating shared ca certs ...
	I1101 09:55:13.880055   62924 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:13.880204   62924 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 09:55:14.539567   62924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt ...
	I1101 09:55:14.539600   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt: {Name:mk702db875df4acab57078dae280f2b2a2f2d6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:14.539780   62924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key ...
	I1101 09:55:14.539793   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key: {Name:mk1e6252eae50628f5658754b8732e32c27dd8a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:14.539869   62924 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 09:55:14.618426   62924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt ...
	I1101 09:55:14.618455   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt: {Name:mk405201ebf4c9c1c06e402900eb7549fe0938be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:14.618653   62924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key ...
	I1101 09:55:14.618671   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key: {Name:mk3998e6fe53b349d815968bec1eef1bbde8c335 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:14.618773   62924 certs.go:257] generating profile certs ...
	I1101 09:55:14.618835   62924 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.key
	I1101 09:55:14.618849   62924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt with IP's: []
	I1101 09:55:15.345709   62924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt ...
	I1101 09:55:15.345754   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: {Name:mkfa8088a21e6394b2b26b7b5a36db558b623a85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:15.345987   62924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.key ...
	I1101 09:55:15.346008   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.key: {Name:mk4eaeec3d3bce07da7288b24b08eb60314781f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:15.346109   62924 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.key.f2421868
	I1101 09:55:15.346128   62924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.crt.f2421868 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 09:55:15.872583   62924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.crt.f2421868 ...
	I1101 09:55:15.872618   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.crt.f2421868: {Name:mke93356eb21971205323e03b3e9302323daf519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:15.872799   62924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.key.f2421868 ...
	I1101 09:55:15.872813   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.key.f2421868: {Name:mk5faaa6d1a4d4cbf379ee73a264d97714d15761 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:15.872886   62924 certs.go:382] copying /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.crt.f2421868 -> /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.crt
	I1101 09:55:15.873007   62924 certs.go:386] copying /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.key.f2421868 -> /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.key
	I1101 09:55:15.873067   62924 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.key
	I1101 09:55:15.873086   62924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.crt with IP's: []
	I1101 09:55:16.062169   62924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.crt ...
	I1101 09:55:16.062205   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.crt: {Name:mke7a14da0c291b6679a01bd7d8fb523f64c90d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:16.062384   62924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.key ...
	I1101 09:55:16.062396   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.key: {Name:mk1d47f2c26e2a0e004ee7360e3a4ab78937f762 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:16.062588   62924 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:55:16.062626   62924 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:55:16.062650   62924 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:55:16.062682   62924 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 09:55:16.063295   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:55:16.081705   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:55:16.099646   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:55:16.118332   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 09:55:16.137167   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:55:16.154829   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:55:16.172278   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:55:16.189397   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:55:16.206743   62924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:55:16.226114   62924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:55:16.238713   62924 ssh_runner.go:195] Run: openssl version
	I1101 09:55:16.244808   62924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:55:16.255948   62924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:55:16.259972   62924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:55:16.260039   62924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:55:16.294415   62924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
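The certs.go/crypto.go entries above show minikube generating a local CA and then profile certificates signed by it, with the apiserver certificate carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2. As a rough, self-contained illustration of that pattern (this is not minikube's own code; every name in it is made up for the sketch), Go's standard crypto/x509 package can produce the same kind of CA plus CA-signed serving cert:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, analogous to .minikube/ca.crt / ca.key in the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert signed by that CA, with the IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}

	// PEM output, mirroring the .crt files that are later scp'd onto the node.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}

The openssl x509 -hash step that follows in the log only computes the subject-hash name (b5213941.0) used for the /etc/ssl/certs symlink, so the node's trust store picks up the new CA.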
	I1101 09:55:16.303537   62924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:55:16.307294   62924 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:55:16.307344   62924 kubeadm.go:401] StartCluster: {Name:addons-407417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-407417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:55:16.307411   62924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:55:16.307476   62924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:55:16.335569   62924 cri.go:89] found id: ""
	I1101 09:55:16.335630   62924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:55:16.343861   62924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:55:16.352115   62924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:55:16.352169   62924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:55:16.360142   62924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:55:16.360160   62924 kubeadm.go:158] found existing configuration files:
	
	I1101 09:55:16.360222   62924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:55:16.367977   62924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:55:16.368047   62924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:55:16.375443   62924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:55:16.383132   62924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:55:16.383194   62924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:55:16.390790   62924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:55:16.398343   62924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:55:16.398395   62924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:55:16.405734   62924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:55:16.413152   62924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:55:16.413215   62924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:55:16.420625   62924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:55:16.459366   62924 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:55:16.459479   62924 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:55:16.479978   62924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:55:16.480073   62924 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:55:16.480125   62924 kubeadm.go:319] OS: Linux
	I1101 09:55:16.480232   62924 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:55:16.480320   62924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:55:16.480406   62924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:55:16.480483   62924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:55:16.480570   62924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:55:16.480644   62924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:55:16.480721   62924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:55:16.480787   62924 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:55:16.540622   62924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:55:16.540819   62924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:55:16.540973   62924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:55:16.547809   62924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:55:16.550349   62924 out.go:252]   - Generating certificates and keys ...
	I1101 09:55:16.550431   62924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:55:16.550516   62924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:55:16.658354   62924 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:55:17.025451   62924 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:55:17.271071   62924 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:55:17.381663   62924 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:55:17.761923   62924 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:55:17.762072   62924 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-407417 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:55:17.873959   62924 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:55:17.874080   62924 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-407417 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 09:55:18.070211   62924 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:55:18.324120   62924 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:55:18.621648   62924 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:55:18.621754   62924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:55:18.710522   62924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:55:18.768529   62924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:55:18.835765   62924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:55:19.232946   62924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:55:19.439650   62924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:55:19.440006   62924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:55:19.443644   62924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:55:19.445126   62924 out.go:252]   - Booting up control plane ...
	I1101 09:55:19.445252   62924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:55:19.445352   62924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:55:19.446393   62924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:55:19.459369   62924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:55:19.459540   62924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:55:19.465419   62924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:55:19.465695   62924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:55:19.465789   62924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:55:19.561433   62924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:55:19.561597   62924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:55:20.563012   62924 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001637649s
	I1101 09:55:20.566876   62924 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:55:20.567028   62924 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 09:55:20.567175   62924 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:55:20.567335   62924 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:55:21.574881   62924 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.007964126s
	I1101 09:55:22.264739   62924 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.697829311s
	I1101 09:55:24.069015   62924 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502086111s
	I1101 09:55:24.080520   62924 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:55:24.089590   62924 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:55:24.097392   62924 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:55:24.097699   62924 kubeadm.go:319] [mark-control-plane] Marking the node addons-407417 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:55:24.104739   62924 kubeadm.go:319] [bootstrap-token] Using token: vt78av.szo12tr3p6vo9ys2
	I1101 09:55:24.106010   62924 out.go:252]   - Configuring RBAC rules ...
	I1101 09:55:24.106144   62924 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:55:24.109694   62924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:55:24.113909   62924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:55:24.116190   62924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:55:24.118149   62924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:55:24.121001   62924 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:55:24.475388   62924 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:55:24.890245   62924 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:55:25.474859   62924 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:55:25.475873   62924 kubeadm.go:319] 
	I1101 09:55:25.475980   62924 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:55:25.475995   62924 kubeadm.go:319] 
	I1101 09:55:25.476110   62924 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:55:25.476121   62924 kubeadm.go:319] 
	I1101 09:55:25.476161   62924 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:55:25.476255   62924 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:55:25.476336   62924 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:55:25.476343   62924 kubeadm.go:319] 
	I1101 09:55:25.476446   62924 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:55:25.476465   62924 kubeadm.go:319] 
	I1101 09:55:25.476561   62924 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:55:25.476578   62924 kubeadm.go:319] 
	I1101 09:55:25.476656   62924 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:55:25.476779   62924 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:55:25.476877   62924 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:55:25.476889   62924 kubeadm.go:319] 
	I1101 09:55:25.476990   62924 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:55:25.477107   62924 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:55:25.477118   62924 kubeadm.go:319] 
	I1101 09:55:25.477230   62924 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vt78av.szo12tr3p6vo9ys2 \
	I1101 09:55:25.477373   62924 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:940bb8e1f96ef3c88df818902bd8202f25d19108c9c93fa4896a1f509b4cfb64 \
	I1101 09:55:25.477405   62924 kubeadm.go:319] 	--control-plane 
	I1101 09:55:25.477415   62924 kubeadm.go:319] 
	I1101 09:55:25.477545   62924 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:55:25.477557   62924 kubeadm.go:319] 
	I1101 09:55:25.477660   62924 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vt78av.szo12tr3p6vo9ys2 \
	I1101 09:55:25.477801   62924 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:940bb8e1f96ef3c88df818902bd8202f25d19108c9c93fa4896a1f509b4cfb64 
	I1101 09:55:25.479043   62924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:55:25.479162   62924 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
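The [kubelet-check] and [control-plane-check] lines above poll health endpoints (http://127.0.0.1:10248/healthz for the kubelet, /livez and /healthz on the control-plane components) until they answer or a 4m0s budget runs out. A minimal sketch of that polling loop (not kubeadm's implementation; waitHealthy is a made-up helper, and the apiserver check would additionally need TLS configuration for its self-signed serving cert):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the timeout expires.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// The kubelet healthz endpoint named in the log above.
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet is healthy")
}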
	I1101 09:55:25.479176   62924 cni.go:84] Creating CNI manager for ""
	I1101 09:55:25.479183   62924 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:55:25.480585   62924 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:55:25.481607   62924 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:55:25.486015   62924 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:55:25.486031   62924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:55:25.498482   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:55:25.690303   62924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:55:25.690431   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:25.690438   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-407417 minikube.k8s.io/updated_at=2025_11_01T09_55_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=addons-407417 minikube.k8s.io/primary=true
	I1101 09:55:25.699707   62924 ops.go:34] apiserver oom_adj: -16
	I1101 09:55:25.764741   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:26.265251   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:26.765111   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:27.265132   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:27.765676   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:28.265594   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:28.765817   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:29.265085   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:29.764973   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:30.265586   62924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:55:30.333601   62924 kubeadm.go:1114] duration metric: took 4.643265118s to wait for elevateKubeSystemPrivileges
	I1101 09:55:30.333650   62924 kubeadm.go:403] duration metric: took 14.02631068s to StartCluster
	I1101 09:55:30.333674   62924 settings.go:142] acquiring lock: {Name:mka443f0ac99a59b23190497686b8296dc73358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:30.333781   62924 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 09:55:30.334887   62924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:55:30.335191   62924 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:55:30.335378   62924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:55:30.335566   62924 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 09:55:30.335711   62924 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:55:30.335793   62924 addons.go:70] Setting inspektor-gadget=true in profile "addons-407417"
	I1101 09:55:30.335791   62924 addons.go:70] Setting yakd=true in profile "addons-407417"
	I1101 09:55:30.335809   62924 addons.go:239] Setting addon inspektor-gadget=true in "addons-407417"
	I1101 09:55:30.335818   62924 addons.go:239] Setting addon yakd=true in "addons-407417"
	I1101 09:55:30.335824   62924 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-407417"
	I1101 09:55:30.335855   62924 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-407417"
	I1101 09:55:30.335864   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.335840   62924 addons.go:70] Setting default-storageclass=true in profile "addons-407417"
	I1101 09:55:30.335872   62924 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-407417"
	I1101 09:55:30.335878   62924 addons.go:70] Setting registry-creds=true in profile "addons-407417"
	I1101 09:55:30.335893   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.335906   62924 addons.go:239] Setting addon registry-creds=true in "addons-407417"
	I1101 09:55:30.336119   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.335950   62924 addons.go:70] Setting cloud-spanner=true in profile "addons-407417"
	I1101 09:55:30.336221   62924 addons.go:239] Setting addon cloud-spanner=true in "addons-407417"
	I1101 09:55:30.336268   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.335867   62924 addons.go:70] Setting metrics-server=true in profile "addons-407417"
	I1101 09:55:30.336307   62924 addons.go:239] Setting addon metrics-server=true in "addons-407417"
	I1101 09:55:30.336338   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.336711   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.336722   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.335975   62924 addons.go:70] Setting registry=true in profile "addons-407417"
	I1101 09:55:30.336817   62924 addons.go:239] Setting addon registry=true in "addons-407417"
	I1101 09:55:30.336847   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.336895   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.335984   62924 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-407417"
	I1101 09:55:30.336897   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.336714   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.336004   62924 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-407417"
	I1101 09:55:30.337071   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.335991   62924 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-407417"
	I1101 09:55:30.337292   62924 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-407417"
	I1101 09:55:30.337320   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.336018   62924 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-407417"
	I1101 09:55:30.337530   62924 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-407417"
	I1101 09:55:30.337593   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.338002   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.338013   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.338029   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.338954   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.336018   62924 addons.go:70] Setting volcano=true in profile "addons-407417"
	I1101 09:55:30.344300   62924 addons.go:239] Setting addon volcano=true in "addons-407417"
	I1101 09:55:30.344386   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.336027   62924 addons.go:70] Setting volumesnapshots=true in profile "addons-407417"
	I1101 09:55:30.336037   62924 addons.go:70] Setting ingress=true in profile "addons-407417"
	I1101 09:55:30.336042   62924 addons.go:70] Setting ingress-dns=true in profile "addons-407417"
	I1101 09:55:30.336050   62924 addons.go:70] Setting gcp-auth=true in profile "addons-407417"
	I1101 09:55:30.335981   62924 addons.go:70] Setting storage-provisioner=true in profile "addons-407417"
	I1101 09:55:30.337021   62924 out.go:179] * Verifying Kubernetes components...
	I1101 09:55:30.335906   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.344890   62924 addons.go:239] Setting addon volumesnapshots=true in "addons-407417"
	I1101 09:55:30.344908   62924 addons.go:239] Setting addon ingress=true in "addons-407417"
	I1101 09:55:30.345202   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.345279   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.344921   62924 addons.go:239] Setting addon ingress-dns=true in "addons-407417"
	I1101 09:55:30.345556   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.345803   62924 mustload.go:66] Loading cluster: addons-407417
	I1101 09:55:30.345961   62924 addons.go:239] Setting addon storage-provisioner=true in "addons-407417"
	I1101 09:55:30.346052   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.348111   62924 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:55:30.348560   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.349130   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.349267   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.349470   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.349692   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.349863   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.350590   62924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:55:30.360487   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.370657   62924 addons.go:239] Setting addon default-storageclass=true in "addons-407417"
	I1101 09:55:30.370709   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.371229   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.378031   62924 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:55:30.379124   62924 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:55:30.379146   62924 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:55:30.379233   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
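The repeated docker container inspect calls with the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} resolve which host port Docker has published for the container's SSH port; the value returned in this run (32768) is the port the ssh clients below connect to. A rough Go equivalent that parses the inspect JSON directly (illustrative only; the struct is trimmed to the fields actually read):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields the port lookup needs from `docker container inspect`.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "addons-407417").Output()
	if err != nil {
		panic(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	bindings := cs[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		fmt.Println("22/tcp is not published to the host")
		return
	}
	fmt.Println(bindings[0].HostPort) // e.g. 32768 in this run
}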
	I1101 09:55:30.389653   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:55:30.390700   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:55:30.392329   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:55:30.395762   62924 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:55:30.396261   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:55:30.398303   62924 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:55:30.398324   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:55:30.398388   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.401754   62924 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 09:55:30.402214   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:55:30.407099   62924 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:55:30.407124   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:55:30.407199   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.407386   62924 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:55:30.407536   62924 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:55:30.409117   62924 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:55:30.409329   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:55:30.409537   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.409249   62924 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:55:30.410795   62924 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:55:30.410853   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.412834   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:55:30.414348   62924 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:55:30.416077   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:55:30.416174   62924 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:55:30.416440   62924 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 09:55:30.417150   62924 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:55:30.417833   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:55:30.417887   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.417444   62924 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:55:30.419203   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:55:30.420566   62924 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:55:30.420586   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:55:30.420639   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.421931   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:55:30.421947   62924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:55:30.422040   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.422191   62924 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:55:30.422202   62924 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:55:30.422253   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.440802   62924 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:55:30.441965   62924 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:55:30.441986   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:55:30.442064   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.446753   62924 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:55:30.448279   62924 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:55:30.448346   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:55:30.448433   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	W1101 09:55:30.451017   62924 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 09:55:30.453680   62924 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:55:30.453923   62924 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:55:30.454426   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.454650   62924 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-407417"
	I1101 09:55:30.454695   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:30.454819   62924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:55:30.454836   62924 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:55:30.454891   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.455135   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:30.462812   62924 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:55:30.465095   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.470298   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.470368   62924 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:55:30.471172   62924 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:55:30.471193   62924 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:55:30.471261   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.471550   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.472857   62924 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:55:30.472878   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:55:30.472937   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.480628   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.484746   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.486712   62924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:55:30.504624   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.507794   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.508293   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.509214   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.516695   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.518271   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.522869   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.525021   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.534459   62924 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W1101 09:55:30.534844   62924 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 09:55:30.536838   62924 retry.go:31] will retry after 257.655026ms: ssh: handshake failed: EOF
	I1101 09:55:30.536772   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.538084   62924 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:55:30.539071   62924 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:55:30.539123   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:55:30.539192   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:30.565703   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:30.566585   62924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:55:30.647526   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:55:30.649366   62924 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:30.649386   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:55:30.649413   62924 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:55:30.649427   62924 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:55:30.669621   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:30.674479   62924 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:55:30.674517   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:55:30.679651   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:55:30.683055   62924 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:55:30.683076   62924 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:55:30.707569   62924 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:55:30.707653   62924 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:55:30.708224   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:55:30.710066   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:55:30.711585   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:55:30.711651   62924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:55:30.715755   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:55:30.716310   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:55:30.735568   62924 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:55:30.735598   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:55:30.736109   62924 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:55:30.736128   62924 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:55:30.740592   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:55:30.740613   62924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:55:30.746283   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:55:30.763582   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:55:30.765850   62924 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:55:30.765898   62924 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:55:30.766127   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:55:30.802693   62924 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:55:30.802717   62924 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:55:30.803103   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:55:30.805372   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:55:30.805437   62924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:55:30.814129   62924 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:55:30.814149   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:55:30.865998   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:55:30.866852   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:55:30.866925   62924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:55:30.882000   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:55:30.938454   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:55:30.938575   62924 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:55:30.958774   62924 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 09:55:30.960665   62924 node_ready.go:35] waiting up to 6m0s for node "addons-407417" to be "Ready" ...
	I1101 09:55:31.016663   62924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:55:31.016761   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:55:31.043303   62924 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:55:31.043411   62924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:55:31.080306   62924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:55:31.080360   62924 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:55:31.130054   62924 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:55:31.130149   62924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:55:31.143261   62924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:55:31.143346   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:55:31.197521   62924 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:55:31.197624   62924 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:55:31.230114   62924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:55:31.230216   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:55:31.249078   62924 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:55:31.249189   62924 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:55:31.272928   62924 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:55:31.273047   62924 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:55:31.288237   62924 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:55:31.288354   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:55:31.319105   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:55:31.339355   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:55:31.468141   62924 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-407417" context rescaled to 1 replicas
	W1101 09:55:31.485853   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:31.485893   62924 retry.go:31] will retry after 311.464742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:31.797564   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:31.936227   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.18990242s)
	I1101 09:55:31.936270   62924 addons.go:480] Verifying addon ingress=true in "addons-407417"
	I1101 09:55:31.936355   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.17273797s)
	I1101 09:55:31.936413   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.170250146s)
	I1101 09:55:31.936571   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.070354216s)
	I1101 09:55:31.936593   62924 addons.go:480] Verifying addon metrics-server=true in "addons-407417"
	I1101 09:55:31.936458   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.133311623s)
	I1101 09:55:31.936638   62924 addons.go:480] Verifying addon registry=true in "addons-407417"
	I1101 09:55:31.936679   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.054656045s)
	I1101 09:55:31.937755   62924 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-407417 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:55:31.937764   62924 out.go:179] * Verifying ingress addon...
	I1101 09:55:31.937808   62924 out.go:179] * Verifying registry addon...
	I1101 09:55:31.940156   62924 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:55:31.940331   62924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:55:31.942801   62924 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:55:31.942826   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:31.942880   62924 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:55:31.942900   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:32.141586   62924 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-407417"
	I1101 09:55:32.143189   62924 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:55:32.145228   62924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:55:32.148804   62924 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:55:32.148821   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:32.444706   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:32.444759   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:32.465751   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.126350642s)
	W1101 09:55:32.465797   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:55:32.465825   62924 retry.go:31] will retry after 348.868544ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	W1101 09:55:32.511143   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:32.511180   62924 retry.go:31] will retry after 538.435574ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:32.649032   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:32.815846   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:55:32.943051   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:32.943256   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:32.964311   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:33.050451   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:33.149406   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:33.443801   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:33.443998   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:33.647944   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:33.943399   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:33.943549   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:34.148316   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:34.444116   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:34.444333   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:34.648878   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:34.943960   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:34.944072   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:35.148744   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:35.282459   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.466561073s)
	I1101 09:55:35.282554   62924 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.232073363s)
	W1101 09:55:35.282593   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:35.282625   62924 retry.go:31] will retry after 497.339744ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:35.443843   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:35.443897   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:35.462626   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:35.648822   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:35.780379   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:35.944100   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:35.944265   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:36.149042   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:55:36.313312   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:36.313348   62924 retry.go:31] will retry after 1.000582141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:36.443301   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:36.443536   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:36.648688   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:36.943816   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:36.944008   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:37.148919   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:37.314803   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:37.444840   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:37.444910   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:37.463405   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:37.648649   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 09:55:37.840791   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:37.840826   62924 retry.go:31] will retry after 1.024024598s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:37.944219   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:37.944385   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:38.060906   62924 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:55:38.060985   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:38.077255   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:38.148442   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:38.181261   62924 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:55:38.193621   62924 addons.go:239] Setting addon gcp-auth=true in "addons-407417"
	I1101 09:55:38.193681   62924 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:55:38.194032   62924 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:55:38.211778   62924 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:55:38.211827   62924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:55:38.228040   62924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:55:38.324947   62924 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:55:38.326049   62924 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:55:38.326952   62924 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:55:38.326967   62924 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:55:38.339666   62924 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:55:38.339686   62924 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:55:38.352129   62924 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:55:38.352151   62924 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:55:38.364446   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:55:38.443256   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:38.443420   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:38.649809   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:38.659734   62924 addons.go:480] Verifying addon gcp-auth=true in "addons-407417"
	I1101 09:55:38.661061   62924 out.go:179] * Verifying gcp-auth addon...
	I1101 09:55:38.662968   62924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:55:38.750052   62924 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:55:38.750074   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:38.865113   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:38.943097   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:38.943309   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:39.148720   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:39.166120   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:55:39.397598   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:39.397631   62924 retry.go:31] will retry after 1.062181945s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:39.443582   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:39.443698   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:39.463750   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:39.648701   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:39.666150   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:39.943075   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:39.943160   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:40.148818   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:40.165889   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:40.443875   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:40.444061   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:40.460166   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:40.648112   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:40.665624   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:40.943746   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:40.943881   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:40.985976   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:40.986019   62924 retry.go:31] will retry after 3.554677844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:41.148761   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:41.166011   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:41.443797   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:41.443985   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:41.648113   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:41.666461   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:41.943214   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:41.943361   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:41.964204   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:42.148944   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:42.166354   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:42.443285   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:42.443594   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:42.648746   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:42.665971   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:42.944257   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:42.944321   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:43.148471   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:43.165768   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:43.443726   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:43.443770   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:43.649224   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:43.665513   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:43.943594   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:43.943728   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:44.148723   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:44.166068   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:44.442873   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:44.442890   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:44.462940   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:44.541150   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:44.648481   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:44.666188   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:44.944192   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:44.944206   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:55:45.084419   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:45.084453   62924 retry.go:31] will retry after 5.991451126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:45.148301   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:45.166010   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:45.444246   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:45.444341   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:45.648556   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:45.666030   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:45.943760   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:45.943802   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:46.147860   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:46.166356   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:46.443115   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:46.443359   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:46.463487   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:46.648315   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:46.665621   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:46.943886   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:46.943897   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:47.148048   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:47.166596   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:47.443712   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:47.443922   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:47.648561   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:47.665971   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:47.944261   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:47.944311   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:48.148111   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:48.166206   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:48.442783   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:48.442927   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:48.648328   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:48.665737   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:48.943787   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:48.943951   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:48.963063   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:49.148920   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:49.165846   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:49.443752   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:49.443963   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:49.648730   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:49.666026   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:49.943590   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:49.943800   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:50.148671   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:50.166175   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:50.443097   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:50.443190   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:50.647815   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:50.665987   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:50.943190   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:50.943193   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:51.076292   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:55:51.148460   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:51.166050   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:51.443169   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:51.443194   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:51.463930   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	W1101 09:55:51.604296   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:51.604335   62924 retry.go:31] will retry after 8.682890672s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:55:51.647626   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:51.665978   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:51.943858   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:51.943923   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:52.148772   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:52.166176   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:52.442853   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:52.443033   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:52.648724   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:52.666167   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:52.943443   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:52.943596   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:53.148915   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:53.165952   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:53.443657   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:53.443815   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:53.648553   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:53.665955   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:53.944074   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:53.944217   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:53.963652   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:54.148291   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:54.165601   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:54.443573   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:54.443636   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:54.648613   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:54.665920   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:54.944416   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:54.944465   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:55.148860   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:55.166188   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:55.442834   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:55.442921   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:55.648824   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:55.665896   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:55.943713   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:55.943913   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:56.148644   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:56.165797   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:56.443612   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:56.443806   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:56.464099   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:56.648763   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:56.666056   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:56.943000   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:56.943175   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:57.148126   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:57.165569   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:57.443483   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:57.443631   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:57.648626   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:57.665521   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:57.943473   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:57.943476   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:58.148230   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:58.165327   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:58.443045   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:58.443185   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:58.648081   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:58.666302   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:58.943100   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:58.943161   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:55:58.963154   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:55:59.148778   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:59.166146   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:59.442730   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:59.442913   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:55:59.648625   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:55:59.665891   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:55:59.943752   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:55:59.943796   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:00.148679   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:00.166041   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:00.288244   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:56:00.443443   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:00.444249   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:00.648288   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:00.666213   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:56:00.815625   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:00.815662   62924 retry.go:31] will retry after 8.529180304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:00.943719   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:00.943783   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:56:00.963888   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:56:01.148717   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:01.166620   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:01.443584   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:01.443616   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:01.648632   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:01.666227   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:01.942995   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:01.943145   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:02.148547   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:02.166229   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:02.443158   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:02.443988   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:02.648620   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:02.666296   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:02.943408   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:02.943474   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:56:02.964297   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:56:03.149075   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:03.167039   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:03.444755   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:03.444793   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:03.648480   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:03.666124   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:03.943470   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:03.943639   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:04.148543   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:04.166232   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:04.443091   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:04.443148   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:04.649075   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:04.667156   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:04.943287   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:04.943295   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:05.148063   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:05.166654   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:05.443961   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:05.444214   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:56:05.463796   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:56:05.649092   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:05.665719   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:05.943552   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:05.943593   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:06.149007   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:06.166767   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:06.443664   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:06.443787   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:06.648264   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:06.665717   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:06.943592   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:06.943659   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:07.148682   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:07.166028   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:07.442936   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:07.443018   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:56:07.464133   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:56:07.648841   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:07.666374   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:07.943411   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:07.943490   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:08.148262   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:08.165877   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:08.443813   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:08.444041   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:08.648654   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:08.666416   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:08.943376   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:08.943597   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:09.148685   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:09.166177   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:09.345420   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:56:09.443640   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:09.443692   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:56:09.464187   62924 node_ready.go:57] node "addons-407417" has "Ready":"False" status (will retry)
	I1101 09:56:09.647859   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:09.666443   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:56:09.884154   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:09.884188   62924 retry.go:31] will retry after 8.502826362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:09.943735   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:09.943920   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:10.148837   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:10.166095   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:10.443078   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:10.443118   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:10.648552   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:10.665840   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:10.943696   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:10.943743   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:11.148634   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:11.166013   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:11.442984   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:11.443136   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:11.648221   62924 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:56:11.648247   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:11.668569   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:11.946013   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:11.946146   62924 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:56:11.946159   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:11.963524   62924 node_ready.go:49] node "addons-407417" is "Ready"
	I1101 09:56:11.963557   62924 node_ready.go:38] duration metric: took 41.002865653s for node "addons-407417" to be "Ready" ...
	I1101 09:56:11.963577   62924 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:56:11.963719   62924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:56:11.984201   62924 api_server.go:72] duration metric: took 41.648963665s to wait for apiserver process to appear ...
	I1101 09:56:11.984233   62924 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:56:11.984278   62924 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 09:56:11.991254   62924 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
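	The healthz probe recorded above can also be exercised by hand; a minimal sketch using kubectl's raw API access (this assumes a kubeconfig already pointing at the addons-407417 cluster, which the log does not show):
	
	# sketch only: ask the apiserver for the same /healthz endpoint the test probes
	kubectl get --raw /healthz
	# prints "ok" when the apiserver reports healthy
	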
	I1101 09:56:11.992307   62924 api_server.go:141] control plane version: v1.34.1
	I1101 09:56:11.992336   62924 api_server.go:131] duration metric: took 8.088882ms to wait for apiserver health ...
	I1101 09:56:11.992391   62924 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:56:12.051053   62924 system_pods.go:59] 20 kube-system pods found
	I1101 09:56:12.051165   62924 system_pods.go:61] "amd-gpu-device-plugin-f46dd" [a6f5f39e-d94a-44ab-bb1f-1030e866f7e6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:56:12.051191   62924 system_pods.go:61] "coredns-66bc5c9577-gp9gr" [0ee3c912-ced4-4f4b-953f-d678b6fd20a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:12.051217   62924 system_pods.go:61] "csi-hostpath-attacher-0" [b7d80cb8-3953-495e-8ff1-355cb55f7ea0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:56:12.051231   62924 system_pods.go:61] "csi-hostpath-resizer-0" [b978cead-a653-4b1a-a343-39d3661a9db5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:56:12.051241   62924 system_pods.go:61] "csi-hostpathplugin-znf7c" [b84da354-d1e0-4555-9d57-a3c3e64663ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:56:12.051247   62924 system_pods.go:61] "etcd-addons-407417" [420e5b78-bddc-441b-914e-21a22b02c8e6] Running
	I1101 09:56:12.051254   62924 system_pods.go:61] "kindnet-662bf" [de1770a9-8ee3-4d49-a598-db9216fb6921] Running
	I1101 09:56:12.051260   62924 system_pods.go:61] "kube-apiserver-addons-407417" [e4196ce7-ee9a-4f1f-8098-8b16da77ef57] Running
	I1101 09:56:12.051273   62924 system_pods.go:61] "kube-controller-manager-addons-407417" [da16686d-5f04-4b11-b9cd-678c5d8575c4] Running
	I1101 09:56:12.051285   62924 system_pods.go:61] "kube-ingress-dns-minikube" [d9f29d4b-f12f-473a-bc27-3e9e258e7e8f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:56:12.051309   62924 system_pods.go:61] "kube-proxy-f5sgj" [f3879aa7-11a0-4687-aa24-06bb786c5687] Running
	I1101 09:56:12.051316   62924 system_pods.go:61] "kube-scheduler-addons-407417" [9f8d50fb-9742-46d2-80e4-66fe5b9d6518] Running
	I1101 09:56:12.051324   62924 system_pods.go:61] "metrics-server-85b7d694d7-tbn2d" [829a0a39-ab34-4ab2-97ab-4cc6d1ec1844] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:56:12.051333   62924 system_pods.go:61] "nvidia-device-plugin-daemonset-z5mvf" [2a31bc49-c837-4001-8e0c-1855ae5050fd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:56:12.051345   62924 system_pods.go:61] "registry-6b586f9694-httq4" [4d19dfc4-d429-42bc-af5d-d46aeca3a22c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:56:12.051363   62924 system_pods.go:61] "registry-creds-764b6fb674-v2bwb" [88aa6c8f-5e6d-48b4-bb4c-c7607072966d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:56:12.051375   62924 system_pods.go:61] "registry-proxy-cz772" [84b0726b-ceae-40c2-821a-3ac4237df885] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:56:12.051384   62924 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dtxff" [cde0609f-d3d4-470e-a2fa-7b0b2fde0d76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.051418   62924 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nmmp8" [3a7db4a4-1576-4a0c-b776-dcc8d030d5d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.051433   62924 system_pods.go:61] "storage-provisioner" [52bc6536-8dd7-4041-afa6-16ff16f38e7e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:12.051442   62924 system_pods.go:74] duration metric: took 59.039724ms to wait for pod list to return data ...
	I1101 09:56:12.051457   62924 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:56:12.054133   62924 default_sa.go:45] found service account: "default"
	I1101 09:56:12.054158   62924 default_sa.go:55] duration metric: took 2.693328ms for default service account to be created ...
	I1101 09:56:12.054169   62924 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:56:12.148118   62924 system_pods.go:86] 20 kube-system pods found
	I1101 09:56:12.148156   62924 system_pods.go:89] "amd-gpu-device-plugin-f46dd" [a6f5f39e-d94a-44ab-bb1f-1030e866f7e6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:56:12.148164   62924 system_pods.go:89] "coredns-66bc5c9577-gp9gr" [0ee3c912-ced4-4f4b-953f-d678b6fd20a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:12.148174   62924 system_pods.go:89] "csi-hostpath-attacher-0" [b7d80cb8-3953-495e-8ff1-355cb55f7ea0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:56:12.148182   62924 system_pods.go:89] "csi-hostpath-resizer-0" [b978cead-a653-4b1a-a343-39d3661a9db5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:56:12.148190   62924 system_pods.go:89] "csi-hostpathplugin-znf7c" [b84da354-d1e0-4555-9d57-a3c3e64663ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:56:12.148198   62924 system_pods.go:89] "etcd-addons-407417" [420e5b78-bddc-441b-914e-21a22b02c8e6] Running
	I1101 09:56:12.148205   62924 system_pods.go:89] "kindnet-662bf" [de1770a9-8ee3-4d49-a598-db9216fb6921] Running
	I1101 09:56:12.148210   62924 system_pods.go:89] "kube-apiserver-addons-407417" [e4196ce7-ee9a-4f1f-8098-8b16da77ef57] Running
	I1101 09:56:12.148215   62924 system_pods.go:89] "kube-controller-manager-addons-407417" [da16686d-5f04-4b11-b9cd-678c5d8575c4] Running
	I1101 09:56:12.148224   62924 system_pods.go:89] "kube-ingress-dns-minikube" [d9f29d4b-f12f-473a-bc27-3e9e258e7e8f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:56:12.148233   62924 system_pods.go:89] "kube-proxy-f5sgj" [f3879aa7-11a0-4687-aa24-06bb786c5687] Running
	I1101 09:56:12.148239   62924 system_pods.go:89] "kube-scheduler-addons-407417" [9f8d50fb-9742-46d2-80e4-66fe5b9d6518] Running
	I1101 09:56:12.148247   62924 system_pods.go:89] "metrics-server-85b7d694d7-tbn2d" [829a0a39-ab34-4ab2-97ab-4cc6d1ec1844] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:56:12.148258   62924 system_pods.go:89] "nvidia-device-plugin-daemonset-z5mvf" [2a31bc49-c837-4001-8e0c-1855ae5050fd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:56:12.148268   62924 system_pods.go:89] "registry-6b586f9694-httq4" [4d19dfc4-d429-42bc-af5d-d46aeca3a22c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:56:12.148279   62924 system_pods.go:89] "registry-creds-764b6fb674-v2bwb" [88aa6c8f-5e6d-48b4-bb4c-c7607072966d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:56:12.148287   62924 system_pods.go:89] "registry-proxy-cz772" [84b0726b-ceae-40c2-821a-3ac4237df885] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:56:12.148294   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtxff" [cde0609f-d3d4-470e-a2fa-7b0b2fde0d76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.148306   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nmmp8" [3a7db4a4-1576-4a0c-b776-dcc8d030d5d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.148314   62924 system_pods.go:89] "storage-provisioner" [52bc6536-8dd7-4041-afa6-16ff16f38e7e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:12.148335   62924 retry.go:31] will retry after 238.602811ms: missing components: kube-dns
	I1101 09:56:12.149169   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:12.165452   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:12.392951   62924 system_pods.go:86] 20 kube-system pods found
	I1101 09:56:12.392999   62924 system_pods.go:89] "amd-gpu-device-plugin-f46dd" [a6f5f39e-d94a-44ab-bb1f-1030e866f7e6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:56:12.393009   62924 system_pods.go:89] "coredns-66bc5c9577-gp9gr" [0ee3c912-ced4-4f4b-953f-d678b6fd20a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:12.393019   62924 system_pods.go:89] "csi-hostpath-attacher-0" [b7d80cb8-3953-495e-8ff1-355cb55f7ea0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:56:12.393027   62924 system_pods.go:89] "csi-hostpath-resizer-0" [b978cead-a653-4b1a-a343-39d3661a9db5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:56:12.393034   62924 system_pods.go:89] "csi-hostpathplugin-znf7c" [b84da354-d1e0-4555-9d57-a3c3e64663ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:56:12.393042   62924 system_pods.go:89] "etcd-addons-407417" [420e5b78-bddc-441b-914e-21a22b02c8e6] Running
	I1101 09:56:12.393049   62924 system_pods.go:89] "kindnet-662bf" [de1770a9-8ee3-4d49-a598-db9216fb6921] Running
	I1101 09:56:12.393058   62924 system_pods.go:89] "kube-apiserver-addons-407417" [e4196ce7-ee9a-4f1f-8098-8b16da77ef57] Running
	I1101 09:56:12.393063   62924 system_pods.go:89] "kube-controller-manager-addons-407417" [da16686d-5f04-4b11-b9cd-678c5d8575c4] Running
	I1101 09:56:12.393072   62924 system_pods.go:89] "kube-ingress-dns-minikube" [d9f29d4b-f12f-473a-bc27-3e9e258e7e8f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:56:12.393077   62924 system_pods.go:89] "kube-proxy-f5sgj" [f3879aa7-11a0-4687-aa24-06bb786c5687] Running
	I1101 09:56:12.393087   62924 system_pods.go:89] "kube-scheduler-addons-407417" [9f8d50fb-9742-46d2-80e4-66fe5b9d6518] Running
	I1101 09:56:12.393095   62924 system_pods.go:89] "metrics-server-85b7d694d7-tbn2d" [829a0a39-ab34-4ab2-97ab-4cc6d1ec1844] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:56:12.393107   62924 system_pods.go:89] "nvidia-device-plugin-daemonset-z5mvf" [2a31bc49-c837-4001-8e0c-1855ae5050fd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:56:12.393116   62924 system_pods.go:89] "registry-6b586f9694-httq4" [4d19dfc4-d429-42bc-af5d-d46aeca3a22c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:56:12.393122   62924 system_pods.go:89] "registry-creds-764b6fb674-v2bwb" [88aa6c8f-5e6d-48b4-bb4c-c7607072966d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:56:12.393130   62924 system_pods.go:89] "registry-proxy-cz772" [84b0726b-ceae-40c2-821a-3ac4237df885] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:56:12.393138   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtxff" [cde0609f-d3d4-470e-a2fa-7b0b2fde0d76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.393151   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nmmp8" [3a7db4a4-1576-4a0c-b776-dcc8d030d5d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.393161   62924 system_pods.go:89] "storage-provisioner" [52bc6536-8dd7-4041-afa6-16ff16f38e7e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:12.393181   62924 retry.go:31] will retry after 357.856743ms: missing components: kube-dns
	I1101 09:56:12.444267   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:12.444301   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:12.648532   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:12.666448   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:12.757150   62924 system_pods.go:86] 20 kube-system pods found
	I1101 09:56:12.757192   62924 system_pods.go:89] "amd-gpu-device-plugin-f46dd" [a6f5f39e-d94a-44ab-bb1f-1030e866f7e6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:56:12.757208   62924 system_pods.go:89] "coredns-66bc5c9577-gp9gr" [0ee3c912-ced4-4f4b-953f-d678b6fd20a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:56:12.757219   62924 system_pods.go:89] "csi-hostpath-attacher-0" [b7d80cb8-3953-495e-8ff1-355cb55f7ea0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:56:12.757228   62924 system_pods.go:89] "csi-hostpath-resizer-0" [b978cead-a653-4b1a-a343-39d3661a9db5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:56:12.757241   62924 system_pods.go:89] "csi-hostpathplugin-znf7c" [b84da354-d1e0-4555-9d57-a3c3e64663ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:56:12.757247   62924 system_pods.go:89] "etcd-addons-407417" [420e5b78-bddc-441b-914e-21a22b02c8e6] Running
	I1101 09:56:12.757259   62924 system_pods.go:89] "kindnet-662bf" [de1770a9-8ee3-4d49-a598-db9216fb6921] Running
	I1101 09:56:12.757266   62924 system_pods.go:89] "kube-apiserver-addons-407417" [e4196ce7-ee9a-4f1f-8098-8b16da77ef57] Running
	I1101 09:56:12.757271   62924 system_pods.go:89] "kube-controller-manager-addons-407417" [da16686d-5f04-4b11-b9cd-678c5d8575c4] Running
	I1101 09:56:12.757282   62924 system_pods.go:89] "kube-ingress-dns-minikube" [d9f29d4b-f12f-473a-bc27-3e9e258e7e8f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:56:12.757287   62924 system_pods.go:89] "kube-proxy-f5sgj" [f3879aa7-11a0-4687-aa24-06bb786c5687] Running
	I1101 09:56:12.757294   62924 system_pods.go:89] "kube-scheduler-addons-407417" [9f8d50fb-9742-46d2-80e4-66fe5b9d6518] Running
	I1101 09:56:12.757302   62924 system_pods.go:89] "metrics-server-85b7d694d7-tbn2d" [829a0a39-ab34-4ab2-97ab-4cc6d1ec1844] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:56:12.757314   62924 system_pods.go:89] "nvidia-device-plugin-daemonset-z5mvf" [2a31bc49-c837-4001-8e0c-1855ae5050fd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:56:12.757321   62924 system_pods.go:89] "registry-6b586f9694-httq4" [4d19dfc4-d429-42bc-af5d-d46aeca3a22c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:56:12.757332   62924 system_pods.go:89] "registry-creds-764b6fb674-v2bwb" [88aa6c8f-5e6d-48b4-bb4c-c7607072966d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:56:12.757353   62924 system_pods.go:89] "registry-proxy-cz772" [84b0726b-ceae-40c2-821a-3ac4237df885] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:56:12.757362   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtxff" [cde0609f-d3d4-470e-a2fa-7b0b2fde0d76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.757378   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nmmp8" [3a7db4a4-1576-4a0c-b776-dcc8d030d5d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:12.757387   62924 system_pods.go:89] "storage-provisioner" [52bc6536-8dd7-4041-afa6-16ff16f38e7e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:56:12.757408   62924 retry.go:31] will retry after 409.377431ms: missing components: kube-dns
	I1101 09:56:12.943939   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:12.944175   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:13.149265   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:13.165508   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:13.170922   62924 system_pods.go:86] 20 kube-system pods found
	I1101 09:56:13.170955   62924 system_pods.go:89] "amd-gpu-device-plugin-f46dd" [a6f5f39e-d94a-44ab-bb1f-1030e866f7e6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:56:13.170963   62924 system_pods.go:89] "coredns-66bc5c9577-gp9gr" [0ee3c912-ced4-4f4b-953f-d678b6fd20a4] Running
	I1101 09:56:13.170973   62924 system_pods.go:89] "csi-hostpath-attacher-0" [b7d80cb8-3953-495e-8ff1-355cb55f7ea0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:56:13.170980   62924 system_pods.go:89] "csi-hostpath-resizer-0" [b978cead-a653-4b1a-a343-39d3661a9db5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:56:13.171003   62924 system_pods.go:89] "csi-hostpathplugin-znf7c" [b84da354-d1e0-4555-9d57-a3c3e64663ab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:56:13.171015   62924 system_pods.go:89] "etcd-addons-407417" [420e5b78-bddc-441b-914e-21a22b02c8e6] Running
	I1101 09:56:13.171022   62924 system_pods.go:89] "kindnet-662bf" [de1770a9-8ee3-4d49-a598-db9216fb6921] Running
	I1101 09:56:13.171033   62924 system_pods.go:89] "kube-apiserver-addons-407417" [e4196ce7-ee9a-4f1f-8098-8b16da77ef57] Running
	I1101 09:56:13.171039   62924 system_pods.go:89] "kube-controller-manager-addons-407417" [da16686d-5f04-4b11-b9cd-678c5d8575c4] Running
	I1101 09:56:13.171049   62924 system_pods.go:89] "kube-ingress-dns-minikube" [d9f29d4b-f12f-473a-bc27-3e9e258e7e8f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:56:13.171058   62924 system_pods.go:89] "kube-proxy-f5sgj" [f3879aa7-11a0-4687-aa24-06bb786c5687] Running
	I1101 09:56:13.171064   62924 system_pods.go:89] "kube-scheduler-addons-407417" [9f8d50fb-9742-46d2-80e4-66fe5b9d6518] Running
	I1101 09:56:13.171072   62924 system_pods.go:89] "metrics-server-85b7d694d7-tbn2d" [829a0a39-ab34-4ab2-97ab-4cc6d1ec1844] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:56:13.171082   62924 system_pods.go:89] "nvidia-device-plugin-daemonset-z5mvf" [2a31bc49-c837-4001-8e0c-1855ae5050fd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:56:13.171095   62924 system_pods.go:89] "registry-6b586f9694-httq4" [4d19dfc4-d429-42bc-af5d-d46aeca3a22c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:56:13.171104   62924 system_pods.go:89] "registry-creds-764b6fb674-v2bwb" [88aa6c8f-5e6d-48b4-bb4c-c7607072966d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:56:13.171113   62924 system_pods.go:89] "registry-proxy-cz772" [84b0726b-ceae-40c2-821a-3ac4237df885] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:56:13.171122   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dtxff" [cde0609f-d3d4-470e-a2fa-7b0b2fde0d76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:13.171135   62924 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nmmp8" [3a7db4a4-1576-4a0c-b776-dcc8d030d5d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:56:13.171142   62924 system_pods.go:89] "storage-provisioner" [52bc6536-8dd7-4041-afa6-16ff16f38e7e] Running
	I1101 09:56:13.171157   62924 system_pods.go:126] duration metric: took 1.116980044s to wait for k8s-apps to be running ...
	I1101 09:56:13.171180   62924 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:56:13.171235   62924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:56:13.185227   62924 system_svc.go:56] duration metric: took 14.034108ms WaitForService to wait for kubelet
	I1101 09:56:13.185267   62924 kubeadm.go:587] duration metric: took 42.850036668s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:56:13.185303   62924 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:56:13.188121   62924 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:56:13.188177   62924 node_conditions.go:123] node cpu capacity is 8
	I1101 09:56:13.188195   62924 node_conditions.go:105] duration metric: took 2.886045ms to run NodePressure ...
	I1101 09:56:13.188207   62924 start.go:242] waiting for startup goroutines ...
	I1101 09:56:13.443266   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:13.443309   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:13.648607   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:13.666086   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:13.943111   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:13.943261   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:14.149383   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:14.165628   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:14.443750   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:14.443809   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:14.648764   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:14.665890   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:14.945557   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:14.945886   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:15.149436   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:15.165691   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:15.443950   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:15.443977   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:15.649685   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:15.665924   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:15.944458   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:15.944572   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:16.149027   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:16.166100   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:16.442801   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:16.442830   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:16.648961   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:16.666078   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:16.943171   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:16.943187   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:17.149252   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:17.165612   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:17.444025   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:17.444059   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:17.649526   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:17.665983   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:17.942987   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:17.943135   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:18.149395   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:18.165461   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:18.387770   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:56:18.444625   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:18.444801   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:18.648836   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:18.666148   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:18.944225   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:18.944279   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 09:56:19.073162   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:19.073198   62924 retry.go:31] will retry after 16.319584627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:19.149380   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:19.165710   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:19.444355   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:19.444389   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:19.648748   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:19.666511   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:19.943648   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:19.943737   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:20.149133   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:20.166585   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:20.444250   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:20.444371   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:20.648365   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:20.665984   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:20.945142   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:20.945278   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:21.149193   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:21.249824   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:21.444376   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:21.444386   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:21.648919   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:21.666798   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:21.944032   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:21.944073   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:22.149638   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:22.165671   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:22.444142   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:22.444161   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:22.649388   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:22.665881   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:22.944036   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:22.944194   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:23.149400   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:23.165470   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:23.443385   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:23.443461   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:23.648286   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:23.665380   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:23.942985   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:23.943087   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:24.149083   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:24.166221   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:24.443727   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:24.443732   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:24.648560   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:24.665702   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:24.944059   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:24.944212   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:25.149010   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:25.166390   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:25.443343   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:25.443349   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:25.648233   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:25.665042   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:25.942853   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:25.942874   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:26.149830   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:26.167644   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:26.445517   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:26.445811   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:26.649041   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:26.666096   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:26.943475   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:26.943525   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:27.148920   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:27.166649   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:27.444513   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:27.444560   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:27.649376   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:27.734058   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:27.944386   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:27.944426   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:28.157065   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:28.166703   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:28.443840   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:28.443930   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:28.649292   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:28.665798   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:28.944246   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:28.944374   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:29.148779   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:29.166348   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:29.443745   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:29.443763   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:29.649308   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:29.665846   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:29.943877   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:29.943957   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:30.168755   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:30.168806   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:30.444247   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:30.444263   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:30.648989   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:30.666573   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:30.943031   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:30.943299   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:31.148653   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:31.165917   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:31.444207   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:31.444286   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:31.649581   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:31.749476   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:31.944878   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:31.946029   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:32.148462   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:32.165922   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:32.444138   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:32.444178   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:32.649213   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:32.666521   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:32.943184   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:32.943362   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:33.149801   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:33.167868   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:33.443469   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:33.443532   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:33.648775   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:33.665985   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:33.944774   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:33.944805   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:34.149332   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:34.166776   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:34.443957   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:34.444065   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:34.649315   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:34.666239   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:34.943161   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:34.943346   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:35.149240   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:35.165335   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:35.393650   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:56:35.443855   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:35.443953   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:35.648897   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:35.666081   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:35.943771   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:35.943837   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 09:56:35.948052   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:35.948088   62924 retry.go:31] will retry after 39.728567543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:56:36.150684   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:36.166361   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:36.444209   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:36.444247   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:36.649123   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:36.666606   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:36.944628   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:36.944672   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:37.148897   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:37.166199   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:37.443801   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:37.443828   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:37.649178   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:37.666894   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:37.944407   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:37.944475   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:38.148602   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:38.165697   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:38.444122   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:38.444365   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:38.648925   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:38.666324   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:38.943321   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:38.943321   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:39.149408   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:39.165247   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:39.443313   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:39.443480   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:39.648189   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:39.666234   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:39.943340   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:39.943376   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:40.148396   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:40.166104   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:40.444896   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:40.444898   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:40.651162   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:40.666588   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:40.944078   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:40.944238   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:41.149691   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:41.166459   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:41.443871   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:41.443925   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:41.649136   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:41.666599   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:41.944043   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:41.944094   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:42.149359   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:42.165333   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:42.443301   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:42.443546   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:42.648193   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:42.665356   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:42.943226   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:42.943349   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:43.148429   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:43.165463   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:43.444112   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:43.444159   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:43.649784   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:43.666275   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:43.943189   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:43.943273   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:44.149897   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:44.166448   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:44.443760   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:44.443825   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:44.649180   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:44.665767   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:44.944007   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:44.944145   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:45.149462   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:45.165624   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:45.444568   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:45.447390   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:45.649519   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:45.668536   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:45.943908   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:56:45.943967   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:46.149425   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:46.165791   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:46.444106   62924 kapi.go:107] duration metric: took 1m14.503772828s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 09:56:46.444152   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:46.656451   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:46.665556   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:46.944323   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:47.148865   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:47.165781   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:47.444255   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:47.649641   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:47.666332   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:47.945267   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:48.148432   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:48.166189   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:48.447057   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:48.648775   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:48.666330   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:48.943903   62924 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:56:49.149569   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:49.166140   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:49.443669   62924 kapi.go:107] duration metric: took 1m17.50350601s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:56:49.649059   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:49.666382   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:50.148640   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:50.165731   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:50.649330   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:50.665902   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:51.148651   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:51.166049   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:51.648311   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:51.665455   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:52.149659   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:52.166519   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:52.649095   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:52.666629   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:53.149170   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:53.165448   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:53.648359   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:53.665313   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:54.148253   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:54.165343   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:54.649244   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:54.665350   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:55.148733   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:55.166582   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:55.648811   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:55.666151   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:56:56.149439   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:56.167252   62924 kapi.go:107] duration metric: took 1m17.504279439s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:56:56.170202   62924 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-407417 cluster.
	I1101 09:56:56.171613   62924 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:56:56.172732   62924 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1101 09:56:56.648844   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:57.149383   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:57.649360   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:58.149881   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:58.649457   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:59.149470   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:56:59.648801   62924 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:57:00.149373   62924 kapi.go:107] duration metric: took 1m28.00413859s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 09:57:15.679269   62924 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 09:57:16.227869   62924 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:57:16.227987   62924 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 09:57:16.229761   62924 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1101 09:57:16.230697   62924 addons.go:515] duration metric: took 1m45.895139161s for enable addons: enabled=[registry-creds amd-gpu-device-plugin nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1101 09:57:16.230735   62924 start.go:247] waiting for cluster config update ...
	I1101 09:57:16.230760   62924 start.go:256] writing updated cluster config ...
	I1101 09:57:16.231008   62924 ssh_runner.go:195] Run: rm -f paused
	I1101 09:57:16.234923   62924 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:57:16.238354   62924 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gp9gr" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.242420   62924 pod_ready.go:94] pod "coredns-66bc5c9577-gp9gr" is "Ready"
	I1101 09:57:16.242442   62924 pod_ready.go:86] duration metric: took 4.065738ms for pod "coredns-66bc5c9577-gp9gr" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.244614   62924 pod_ready.go:83] waiting for pod "etcd-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.247958   62924 pod_ready.go:94] pod "etcd-addons-407417" is "Ready"
	I1101 09:57:16.247980   62924 pod_ready.go:86] duration metric: took 3.345138ms for pod "etcd-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.249914   62924 pod_ready.go:83] waiting for pod "kube-apiserver-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.253148   62924 pod_ready.go:94] pod "kube-apiserver-addons-407417" is "Ready"
	I1101 09:57:16.253170   62924 pod_ready.go:86] duration metric: took 3.232779ms for pod "kube-apiserver-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.254878   62924 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.638668   62924 pod_ready.go:94] pod "kube-controller-manager-addons-407417" is "Ready"
	I1101 09:57:16.638698   62924 pod_ready.go:86] duration metric: took 383.799572ms for pod "kube-controller-manager-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:16.838923   62924 pod_ready.go:83] waiting for pod "kube-proxy-f5sgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:17.239049   62924 pod_ready.go:94] pod "kube-proxy-f5sgj" is "Ready"
	I1101 09:57:17.239080   62924 pod_ready.go:86] duration metric: took 400.130153ms for pod "kube-proxy-f5sgj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:17.439526   62924 pod_ready.go:83] waiting for pod "kube-scheduler-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:17.838369   62924 pod_ready.go:94] pod "kube-scheduler-addons-407417" is "Ready"
	I1101 09:57:17.838402   62924 pod_ready.go:86] duration metric: took 398.851419ms for pod "kube-scheduler-addons-407417" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:57:17.838419   62924 pod_ready.go:40] duration metric: took 1.603459028s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:57:17.881002   62924 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:57:17.882582   62924 out.go:179] * Done! kubectl is now configured to use "addons-407417" cluster and "default" namespace by default
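Note on the repeated inspektor-gadget failure above: every retry of `kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml` is rejected with "apiVersion not set, kind not set", meaning the CRD manifest shipped by the addon is missing the two top-level fields that client-side validation requires in any Kubernetes object. The actual contents of ig-crd.yaml are not shown in this log, so the sketch below is only an illustration of what a minimally valid CRD header looks like; the group, resource names, and schema here are hypothetical placeholders, not the addon's real definition.

	# hypothetical minimal CRD, for illustration only -- names are assumptions
	apiVersion: apiextensions.k8s.io/v1        # the field reported as "not set"
	kind: CustomResourceDefinition             # the field reported as "not set"
	metadata:
	  name: traces.gadget.example.io
	spec:
	  group: gadget.example.io
	  scope: Namespaced
	  names:
	    plural: traces
	    singular: trace
	    kind: Trace
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
	          x-kubernetes-preserve-unknown-fields: true

The workaround suggested in the log output (`--validate=false`) would only skip the client-side check rather than repair the manifest, which is why each retry fails the same way while the other documents in the same apply (namespace, RBAC, daemonset) continue to go through unchanged.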
	
	
	==> CRI-O <==
	Nov 01 09:57:24 addons-407417 crio[780]: time="2025-11-01T09:57:24.69274187Z" level=info msg="Removing container: 460c16796789fcd248661392a4ee4d72c92c68a83f3e690721ea255b09104d81" id=3ad28eb5-deae-4fce-a09c-6e0979e7f8a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:57:24 addons-407417 crio[780]: time="2025-11-01T09:57:24.698991297Z" level=info msg="Removed container 460c16796789fcd248661392a4ee4d72c92c68a83f3e690721ea255b09104d81: gcp-auth/gcp-auth-certs-create-6pvcw/create" id=3ad28eb5-deae-4fce-a09c-6e0979e7f8a2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:57:24 addons-407417 crio[780]: time="2025-11-01T09:57:24.701346885Z" level=info msg="Stopping pod sandbox: ac060788eda4234619dc8503fd9e32b95a27d86eb6e6b3d8ef01567a975228fa" id=f295945a-4b23-480c-910a-c92e77a8d26a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:57:24 addons-407417 crio[780]: time="2025-11-01T09:57:24.701387852Z" level=info msg="Stopped pod sandbox (already stopped): ac060788eda4234619dc8503fd9e32b95a27d86eb6e6b3d8ef01567a975228fa" id=f295945a-4b23-480c-910a-c92e77a8d26a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:57:24 addons-407417 crio[780]: time="2025-11-01T09:57:24.701708211Z" level=info msg="Removing pod sandbox: ac060788eda4234619dc8503fd9e32b95a27d86eb6e6b3d8ef01567a975228fa" id=5de86ff2-9a6f-4c5d-aa2d-6efb5bcbde65 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:57:24 addons-407417 crio[780]: time="2025-11-01T09:57:24.704424421Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:57:24 addons-407417 crio[780]: time="2025-11-01T09:57:24.70447696Z" level=info msg="Removed pod sandbox: ac060788eda4234619dc8503fd9e32b95a27d86eb6e6b3d8ef01567a975228fa" id=5de86ff2-9a6f-4c5d-aa2d-6efb5bcbde65 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:57:24 addons-407417 crio[780]: time="2025-11-01T09:57:24.704900412Z" level=info msg="Stopping pod sandbox: 9ac20ad836a947caf6c14e56822152a3c9b97227d22619d052ab9685afef9a16" id=9eb48949-1c66-477b-8f7f-20a0e4bc9110 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:57:24 addons-407417 crio[780]: time="2025-11-01T09:57:24.704935369Z" level=info msg="Stopped pod sandbox (already stopped): 9ac20ad836a947caf6c14e56822152a3c9b97227d22619d052ab9685afef9a16" id=9eb48949-1c66-477b-8f7f-20a0e4bc9110 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:57:24 addons-407417 crio[780]: time="2025-11-01T09:57:24.705199096Z" level=info msg="Removing pod sandbox: 9ac20ad836a947caf6c14e56822152a3c9b97227d22619d052ab9685afef9a16" id=7a2f6e09-ba15-4feb-a857-308438580ed2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:57:24 addons-407417 crio[780]: time="2025-11-01T09:57:24.70765792Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:57:24 addons-407417 crio[780]: time="2025-11-01T09:57:24.707702997Z" level=info msg="Removed pod sandbox: 9ac20ad836a947caf6c14e56822152a3c9b97227d22619d052ab9685afef9a16" id=7a2f6e09-ba15-4feb-a857-308438580ed2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.612954071Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386/POD" id=38d23031-6a42-4019-9bc5-461a8b565308 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.613052841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.67965789Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386 Namespace:local-path-storage ID:8937ac06cbb1b7291ef02ab04b29950f1d14eae90ff9d7c0441ffe8a1b873575 UID:1d37a5cb-fbba-4ff0-94a5-b78041de3ea1 NetNS:/var/run/netns/55cc2016-e50f-428a-a782-1bf6eba7e29e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003c27a0}] Aliases:map[]}"
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.679697034Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386 to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.690036413Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386 Namespace:local-path-storage ID:8937ac06cbb1b7291ef02ab04b29950f1d14eae90ff9d7c0441ffe8a1b873575 UID:1d37a5cb-fbba-4ff0-94a5-b78041de3ea1 NetNS:/var/run/netns/55cc2016-e50f-428a-a782-1bf6eba7e29e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003c27a0}] Aliases:map[]}"
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.690151497Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386 for CNI network kindnet (type=ptp)"
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.690963358Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.691860131Z" level=info msg="Ran pod sandbox 8937ac06cbb1b7291ef02ab04b29950f1d14eae90ff9d7c0441ffe8a1b873575 with infra container: local-path-storage/helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386/POD" id=38d23031-6a42-4019-9bc5-461a8b565308 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.693019223Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=2700bbcb-ae8f-4ab4-aa8d-de527c97d7fb name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.693162339Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=2700bbcb-ae8f-4ab4-aa8d-de527c97d7fb name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.693206254Z" level=info msg="Neither image nor artifact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=2700bbcb-ae8f-4ab4-aa8d-de527c97d7fb name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.69376121Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=12948af0-7d16-41c4-ae61-84e8675739b6 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:57:29 addons-407417 crio[780]: time="2025-11-01T09:57:29.704491399Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	a8e80b4f1ffa7       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   d4986ec902ef1       busybox                                     default
	f08090ded8635       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          30 seconds ago       Running             csi-snapshotter                          0                   4c6fe39cf6260       csi-hostpathplugin-znf7c                    kube-system
	4a12913519788       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          31 seconds ago       Running             csi-provisioner                          0                   4c6fe39cf6260       csi-hostpathplugin-znf7c                    kube-system
	fa059e5944f6d       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            33 seconds ago       Running             liveness-probe                           0                   4c6fe39cf6260       csi-hostpathplugin-znf7c                    kube-system
	fa38ee36042b1       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           33 seconds ago       Running             hostpath                                 0                   4c6fe39cf6260       csi-hostpathplugin-znf7c                    kube-system
	7e4b0adebd6e4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 34 seconds ago       Running             gcp-auth                                 0                   7c5232e995896       gcp-auth-78565c9fb4-xnctl                   gcp-auth
	01e2a427bdd0a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            37 seconds ago       Running             gadget                                   0                   d36b2d107f3ec       gadget-swsl2                                gadget
	e2fa965dde20e       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                40 seconds ago       Running             node-driver-registrar                    0                   4c6fe39cf6260       csi-hostpathplugin-znf7c                    kube-system
	b4c2b31c1d67c       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             41 seconds ago       Running             controller                               0                   e5c96b9292f4f       ingress-nginx-controller-675c5ddd98-2fxqb   ingress-nginx
	26ca487996d46       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              45 seconds ago       Running             registry-proxy                           0                   a0542e1695748       registry-proxy-cz772                        kube-system
	209034ab12f22       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     48 seconds ago       Running             nvidia-device-plugin-ctr                 0                   07018c2dc826e       nvidia-device-plugin-daemonset-z5mvf        kube-system
	227a3dea494bb       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              52 seconds ago       Running             yakd                                     0                   116cc4c7c5337       yakd-dashboard-5ff678cb9-7p2rl              yakd-dashboard
	bddb1deaf2b50       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   55 seconds ago       Running             csi-external-health-monitor-controller   0                   4c6fe39cf6260       csi-hostpathplugin-znf7c                    kube-system
	febf4ba9fa488       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     56 seconds ago       Running             amd-gpu-device-plugin                    0                   720f494368ff8       amd-gpu-device-plugin-f46dd                 kube-system
	3901514c12896       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      58 seconds ago       Running             volume-snapshot-controller               0                   0613b67e144b5       snapshot-controller-7d9fbc56b8-nmmp8        kube-system
	dea585cf0fda5       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      58 seconds ago       Running             volume-snapshot-controller               0                   dfc6c327de889       snapshot-controller-7d9fbc56b8-dtxff        kube-system
	1a43d3e93f88a       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               59 seconds ago       Running             minikube-ingress-dns                     0                   9db5c62c6a0d8       kube-ingress-dns-minikube                   kube-system
	b0fa2acbd6707       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   c59aa627a516f       local-path-provisioner-648f6765c9-zxm6m     local-path-storage
	f1cbdd3dea0c8       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   6e2b63c0901c1       csi-hostpath-attacher-0                     kube-system
	e9ee42459c8cc       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   88de74db94407       csi-hostpath-resizer-0                      kube-system
	c1eb0ca70ea95       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             About a minute ago   Exited              patch                                    1                   41751ecaf9569       ingress-nginx-admission-patch-ppfh2         ingress-nginx
	8027ccccaa983       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   4748bf76000a3       ingress-nginx-admission-create-hqmb4        ingress-nginx
	84efcf67417dc       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   2e375288dcd20       cloud-spanner-emulator-86bd5cbb97-rvb7g     default
	c21e111d12956       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   84a6a789764c0       registry-6b586f9694-httq4                   kube-system
	ee71e1d3f20be       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   d245a22e7da01       metrics-server-85b7d694d7-tbn2d             kube-system
	1800341cdaf4e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   7d5dc12fac659       coredns-66bc5c9577-gp9gr                    kube-system
	c652bc696ccca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   e9b2ea2d0d732       storage-provisioner                         kube-system
	3e48e42054985       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   b09d806c02ee1       kindnet-662bf                               kube-system
	93000561a31e8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   c87fe66aa0e17       kube-proxy-f5sgj                            kube-system
	b02d1d64a55b7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   184c6f0792c8f       kube-apiserver-addons-407417                kube-system
	7f28a4faf3888       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   b87e1f7e6a0fc       kube-scheduler-addons-407417                kube-system
	6aaf19e53fbb2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   eb47bec278ac3       etcd-addons-407417                          kube-system
	4d0958fc37fb7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   603217c00fc86       kube-controller-manager-addons-407417       kube-system
	
	
	==> coredns [1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2] <==
	[INFO] 10.244.0.18:57603 - 52760 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.005713504s
	[INFO] 10.244.0.18:34583 - 41580 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000083202s
	[INFO] 10.244.0.18:34583 - 41249 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000107423s
	[INFO] 10.244.0.18:47172 - 41126 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00008536s
	[INFO] 10.244.0.18:47172 - 40812 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000096114s
	[INFO] 10.244.0.18:58180 - 49156 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000047323s
	[INFO] 10.244.0.18:58180 - 48931 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00007074s
	[INFO] 10.244.0.18:39129 - 10026 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129633s
	[INFO] 10.244.0.18:39129 - 10422 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000170126s
	[INFO] 10.244.0.22:58063 - 30064 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000163168s
	[INFO] 10.244.0.22:54429 - 51935 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000170045s
	[INFO] 10.244.0.22:54414 - 32869 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154336s
	[INFO] 10.244.0.22:45434 - 61937 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000163883s
	[INFO] 10.244.0.22:50280 - 15568 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128966s
	[INFO] 10.244.0.22:50108 - 45198 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000165369s
	[INFO] 10.244.0.22:46358 - 29999 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.002676996s
	[INFO] 10.244.0.22:35014 - 39027 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004469468s
	[INFO] 10.244.0.22:50319 - 45152 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005439828s
	[INFO] 10.244.0.22:40012 - 54366 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006113419s
	[INFO] 10.244.0.22:52075 - 18071 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005821467s
	[INFO] 10.244.0.22:58890 - 31970 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005954755s
	[INFO] 10.244.0.22:33573 - 51820 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004648093s
	[INFO] 10.244.0.22:35075 - 19637 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007652379s
	[INFO] 10.244.0.22:51804 - 7827 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000794878s
	[INFO] 10.244.0.22:37747 - 7433 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001788876s
	
	
	==> describe nodes <==
	Name:               addons-407417
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-407417
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=addons-407417
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_55_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-407417
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-407417"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:55:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-407417
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:57:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:57:27 +0000   Sat, 01 Nov 2025 09:55:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:57:27 +0000   Sat, 01 Nov 2025 09:55:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:57:27 +0000   Sat, 01 Nov 2025 09:55:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:57:27 +0000   Sat, 01 Nov 2025 09:56:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-407417
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                93d9c905-5f59-4697-8bdc-5b43720cd9fb
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-86bd5cbb97-rvb7g                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  gadget                      gadget-swsl2                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  gcp-auth                    gcp-auth-78565c9fb4-xnctl                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-2fxqb                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         119s
	  kube-system                 amd-gpu-device-plugin-f46dd                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 coredns-66bc5c9577-gp9gr                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 csi-hostpathplugin-znf7c                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 etcd-addons-407417                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m6s
	  kube-system                 kindnet-662bf                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-addons-407417                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-addons-407417                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-f5sgj                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-addons-407417                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 metrics-server-85b7d694d7-tbn2d                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         119s
	  kube-system                 nvidia-device-plugin-daemonset-z5mvf                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 registry-6b586f9694-httq4                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 registry-creds-764b6fb674-v2bwb                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 registry-proxy-cz772                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 snapshot-controller-7d9fbc56b8-dtxff                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 snapshot-controller-7d9fbc56b8-nmmp8                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  local-path-storage          helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-648f6765c9-zxm6m                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-7p2rl                                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 118s  kube-proxy       
	  Normal  Starting                 2m6s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s  kubelet          Node addons-407417 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s  kubelet          Node addons-407417 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s  kubelet          Node addons-407417 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m1s  node-controller  Node addons-407417 event: Registered Node addons-407417 in Controller
	  Normal  NodeReady                79s   kubelet          Node addons-407417 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001888] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084009] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.368874] i8042: Warning: Keylock active
	[  +0.009947] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.455608] block sda: the capability attribute has been deprecated.
	[  +0.077240] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020831] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.657102] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda] <==
	{"level":"warn","ts":"2025-11-01T09:55:21.749448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.755782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.762332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.769079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.775124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.781359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.787205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.794169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.799883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.824990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.831002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:21.836885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:33.036513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:33.043060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:59.255060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:59.261408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:59.274046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:55:59.280228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:57:29.446981Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"159.99459ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041022195886575 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/test-pvc.1873d986176a7cea\" mod_revision:1286 > success:<request_put:<key:\"/registry/events/default/test-pvc.1873d986176a7cea\" value_size:818 lease:8128041022195886558 >> failure:<request_range:<key:\"/registry/events/default/test-pvc.1873d986176a7cea\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T09:57:29.447105Z","caller":"traceutil/trace.go:172","msg":"trace[1988407247] linearizableReadLoop","detail":"{readStateIndex:1330; appliedIndex:1329; }","duration":"158.563398ms","start":"2025-11-01T09:57:29.288525Z","end":"2025-11-01T09:57:29.447089Z","steps":["trace[1988407247] 'read index received'  (duration: 35.149µs)","trace[1988407247] 'applied index is now lower than readState.Index'  (duration: 158.527242ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:57:29.447140Z","caller":"traceutil/trace.go:172","msg":"trace[632461307] transaction","detail":"{read_only:false; response_revision:1289; number_of_response:1; }","duration":"182.705626ms","start":"2025-11-01T09:57:29.264413Z","end":"2025-11-01T09:57:29.447119Z","steps":["trace[632461307] 'process raft request'  (duration: 22.01965ms)","trace[632461307] 'compare'  (duration: 159.896616ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:57:29.447311Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.777261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386\" limit:1 ","response":"range_response_count:1 size:2886"}
	{"level":"info","ts":"2025-11-01T09:57:29.447350Z","caller":"traceutil/trace.go:172","msg":"trace[19721979] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386; range_end:; response_count:1; response_revision:1289; }","duration":"158.830264ms","start":"2025-11-01T09:57:29.288512Z","end":"2025-11-01T09:57:29.447342Z","steps":["trace[19721979] 'agreement among raft nodes before linearized reading'  (duration: 158.65309ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:57:29.447468Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.170946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattributesclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:57:29.447521Z","caller":"traceutil/trace.go:172","msg":"trace[1709287909] range","detail":"{range_begin:/registry/volumeattributesclasses; range_end:; response_count:0; response_revision:1289; }","duration":"128.203483ms","start":"2025-11-01T09:57:29.319284Z","end":"2025-11-01T09:57:29.447487Z","steps":["trace[1709287909] 'agreement among raft nodes before linearized reading'  (duration: 128.136599ms)"],"step_count":1}
	
	
	==> gcp-auth [7e4b0adebd6e4b4f76a5187c833c9926bbd8fe14a0b790415afb0904d52a6614] <==
	2025/11/01 09:56:55 GCP Auth Webhook started!
	2025/11/01 09:57:18 Ready to marshal response ...
	2025/11/01 09:57:18 Ready to write response ...
	2025/11/01 09:57:18 Ready to marshal response ...
	2025/11/01 09:57:18 Ready to write response ...
	2025/11/01 09:57:18 Ready to marshal response ...
	2025/11/01 09:57:18 Ready to write response ...
	2025/11/01 09:57:29 Ready to marshal response ...
	2025/11/01 09:57:29 Ready to write response ...
	2025/11/01 09:57:29 Ready to marshal response ...
	2025/11/01 09:57:29 Ready to write response ...
	
	
	==> kernel <==
	 09:57:30 up  1:39,  0 user,  load average: 1.24, 2.01, 2.15
	Linux addons-407417 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab] <==
	I1101 09:55:31.262887       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:55:31.263775       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:56:01.263379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 09:56:01.263465       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:56:01.263580       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 09:56:01.263673       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1101 09:56:02.563089       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:56:02.563124       1 metrics.go:72] Registering metrics
	I1101 09:56:02.563230       1 controller.go:711] "Syncing nftables rules"
	I1101 09:56:11.264122       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:56:11.264191       1 main.go:301] handling current node
	I1101 09:56:21.262459       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:56:21.262510       1 main.go:301] handling current node
	I1101 09:56:31.262745       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:56:31.262781       1 main.go:301] handling current node
	I1101 09:56:41.265002       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:56:41.265050       1 main.go:301] handling current node
	I1101 09:56:51.263427       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:56:51.263461       1 main.go:301] handling current node
	I1101 09:57:01.262571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:57:01.262604       1 main.go:301] handling current node
	I1101 09:57:11.268167       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:57:11.268207       1 main.go:301] handling current node
	I1101 09:57:21.264596       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:57:21.264632       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2] <==
	I1101 09:55:38.608905       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.106.5.230"}
	W1101 09:55:59.255052       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:55:59.261402       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:55:59.273960       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:55:59.280186       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:56:11.586596       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.5.230:443: connect: connection refused
	W1101 09:56:11.586597       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.5.230:443: connect: connection refused
	E1101 09:56:11.586686       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.5.230:443: connect: connection refused" logger="UnhandledError"
	E1101 09:56:11.586708       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.5.230:443: connect: connection refused" logger="UnhandledError"
	W1101 09:56:11.604238       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.5.230:443: connect: connection refused
	E1101 09:56:11.604290       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.5.230:443: connect: connection refused" logger="UnhandledError"
	W1101 09:56:11.612021       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.5.230:443: connect: connection refused
	E1101 09:56:11.612054       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.5.230:443: connect: connection refused" logger="UnhandledError"
	W1101 09:56:14.864529       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 09:56:14.864605       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 09:56:14.864649       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.42.97:443: connect: connection refused" logger="UnhandledError"
	E1101 09:56:14.866542       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.42.97:443: connect: connection refused" logger="UnhandledError"
	E1101 09:56:14.872403       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.42.97:443: connect: connection refused" logger="UnhandledError"
	E1101 09:56:14.893729       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.42.97:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.42.97:443: connect: connection refused" logger="UnhandledError"
	I1101 09:56:14.973146       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 09:57:28.531215       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40350: use of closed network connection
	E1101 09:57:28.674920       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40372: use of closed network connection
	
	
	==> kube-controller-manager [4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9] <==
	I1101 09:55:29.236623       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:55:29.236629       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:55:29.236852       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:55:29.236900       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:55:29.236995       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:55:29.237108       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:55:29.238056       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:55:29.238074       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:55:29.238215       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:55:29.238231       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:55:29.239436       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:55:29.241749       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:55:29.243940       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:55:29.245025       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:55:29.250297       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:55:29.255541       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:55:29.257795       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 09:55:59.249166       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:55:59.249371       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 09:55:59.249435       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 09:55:59.265299       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 09:55:59.269021       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 09:55:59.350009       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:55:59.369388       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:56:14.173525       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265] <==
	I1101 09:55:30.861019       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:55:31.351457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:55:31.452438       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:55:31.452596       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:55:31.452767       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:55:31.531157       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:55:31.531266       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:55:31.562518       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:55:31.574702       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:55:31.577442       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:55:31.582016       1 config.go:200] "Starting service config controller"
	I1101 09:55:31.582041       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:55:31.582149       1 config.go:309] "Starting node config controller"
	I1101 09:55:31.582227       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:55:31.582269       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:55:31.582854       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:55:31.583014       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:55:31.582963       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:55:31.583109       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:55:31.682183       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:55:31.683525       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:55:31.683633       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45] <==
	E1101 09:55:22.263289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:55:22.263299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:55:22.263343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:55:22.263419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:55:22.263363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:55:22.263375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:55:22.263419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:55:22.263563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:55:22.263628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:55:22.263729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:55:22.263735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:55:22.263823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:55:23.071624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:55:23.123656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:55:23.272618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:55:23.277551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:55:23.290318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:55:23.323405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:55:23.345452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:55:23.347388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:55:23.389785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:55:23.416018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:55:23.479472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:55:23.570693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:55:26.260234       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:56:43 addons-407417 kubelet[1307]: E1101 09:56:43.413808    1307 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 01 09:56:43 addons-407417 kubelet[1307]: E1101 09:56:43.413903    1307 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88aa6c8f-5e6d-48b4-bb4c-c7607072966d-gcr-creds podName:88aa6c8f-5e6d-48b4-bb4c-c7607072966d nodeName:}" failed. No retries permitted until 2025-11-01 09:57:15.413883803 +0000 UTC m=+110.806253826 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/88aa6c8f-5e6d-48b4-bb4c-c7607072966d-gcr-creds") pod "registry-creds-764b6fb674-v2bwb" (UID: "88aa6c8f-5e6d-48b4-bb4c-c7607072966d") : secret "registry-creds-gcr" not found
	Nov 01 09:56:45 addons-407417 kubelet[1307]: I1101 09:56:45.974860    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cz772" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:56:45 addons-407417 kubelet[1307]: I1101 09:56:45.987001    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-cz772" podStartSLOduration=2.051765099 podStartE2EDuration="34.986979803s" podCreationTimestamp="2025-11-01 09:56:11 +0000 UTC" firstStartedPulling="2025-11-01 09:56:12.056940706 +0000 UTC m=+47.449310716" lastFinishedPulling="2025-11-01 09:56:44.992155395 +0000 UTC m=+80.384525420" observedRunningTime="2025-11-01 09:56:45.985725973 +0000 UTC m=+81.378096014" watchObservedRunningTime="2025-11-01 09:56:45.986979803 +0000 UTC m=+81.379349833"
	Nov 01 09:56:46 addons-407417 kubelet[1307]: I1101 09:56:46.978811    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cz772" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:56:48 addons-407417 kubelet[1307]: I1101 09:56:48.996382    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-2fxqb" podStartSLOduration=59.155384976 podStartE2EDuration="1m17.996361744s" podCreationTimestamp="2025-11-01 09:55:31 +0000 UTC" firstStartedPulling="2025-11-01 09:56:30.057261794 +0000 UTC m=+65.449631805" lastFinishedPulling="2025-11-01 09:56:48.898238564 +0000 UTC m=+84.290608573" observedRunningTime="2025-11-01 09:56:48.996113822 +0000 UTC m=+84.388483862" watchObservedRunningTime="2025-11-01 09:56:48.996361744 +0000 UTC m=+84.388731775"
	Nov 01 09:56:54 addons-407417 kubelet[1307]: I1101 09:56:54.019078    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-swsl2" podStartSLOduration=66.083266347 podStartE2EDuration="1m23.019055383s" podCreationTimestamp="2025-11-01 09:55:31 +0000 UTC" firstStartedPulling="2025-11-01 09:56:36.049416161 +0000 UTC m=+71.441786175" lastFinishedPulling="2025-11-01 09:56:52.985205198 +0000 UTC m=+88.377575211" observedRunningTime="2025-11-01 09:56:54.018223299 +0000 UTC m=+89.410593343" watchObservedRunningTime="2025-11-01 09:56:54.019055383 +0000 UTC m=+89.411425416"
	Nov 01 09:56:56 addons-407417 kubelet[1307]: I1101 09:56:56.026091    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-xnctl" podStartSLOduration=66.141869362 podStartE2EDuration="1m18.026070372s" podCreationTimestamp="2025-11-01 09:55:38 +0000 UTC" firstStartedPulling="2025-11-01 09:56:43.751692031 +0000 UTC m=+79.144062057" lastFinishedPulling="2025-11-01 09:56:55.635893054 +0000 UTC m=+91.028263067" observedRunningTime="2025-11-01 09:56:56.026055287 +0000 UTC m=+91.418425319" watchObservedRunningTime="2025-11-01 09:56:56.026070372 +0000 UTC m=+91.418440403"
	Nov 01 09:56:57 addons-407417 kubelet[1307]: I1101 09:56:57.747317    1307 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 01 09:56:57 addons-407417 kubelet[1307]: I1101 09:56:57.747359    1307 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 01 09:57:00 addons-407417 kubelet[1307]: I1101 09:57:00.056605    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-znf7c" podStartSLOduration=1.544589603 podStartE2EDuration="49.056581642s" podCreationTimestamp="2025-11-01 09:56:11 +0000 UTC" firstStartedPulling="2025-11-01 09:56:12.025404588 +0000 UTC m=+47.417774612" lastFinishedPulling="2025-11-01 09:56:59.537396635 +0000 UTC m=+94.929766651" observedRunningTime="2025-11-01 09:57:00.055069645 +0000 UTC m=+95.447439689" watchObservedRunningTime="2025-11-01 09:57:00.056581642 +0000 UTC m=+95.448951672"
	Nov 01 09:57:02 addons-407417 kubelet[1307]: I1101 09:57:02.691769    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e69c82-c641-4422-92e2-560194cf5db7" path="/var/lib/kubelet/pods/50e69c82-c641-4422-92e2-560194cf5db7/volumes"
	Nov 01 09:57:02 addons-407417 kubelet[1307]: I1101 09:57:02.692330    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea7878d-fef6-4959-a5f9-6dfa516878c2" path="/var/lib/kubelet/pods/cea7878d-fef6-4959-a5f9-6dfa516878c2/volumes"
	Nov 01 09:57:15 addons-407417 kubelet[1307]: E1101 09:57:15.458016    1307 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 01 09:57:15 addons-407417 kubelet[1307]: E1101 09:57:15.458151    1307 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88aa6c8f-5e6d-48b4-bb4c-c7607072966d-gcr-creds podName:88aa6c8f-5e6d-48b4-bb4c-c7607072966d nodeName:}" failed. No retries permitted until 2025-11-01 09:58:19.458122409 +0000 UTC m=+174.850492419 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/88aa6c8f-5e6d-48b4-bb4c-c7607072966d-gcr-creds") pod "registry-creds-764b6fb674-v2bwb" (UID: "88aa6c8f-5e6d-48b4-bb4c-c7607072966d") : secret "registry-creds-gcr" not found
	Nov 01 09:57:18 addons-407417 kubelet[1307]: I1101 09:57:18.480290    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/99a6b686-6484-4836-a66c-e292ed6386c7-gcp-creds\") pod \"busybox\" (UID: \"99a6b686-6484-4836-a66c-e292ed6386c7\") " pod="default/busybox"
	Nov 01 09:57:18 addons-407417 kubelet[1307]: I1101 09:57:18.480440    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k7gl\" (UniqueName: \"kubernetes.io/projected/99a6b686-6484-4836-a66c-e292ed6386c7-kube-api-access-9k7gl\") pod \"busybox\" (UID: \"99a6b686-6484-4836-a66c-e292ed6386c7\") " pod="default/busybox"
	Nov 01 09:57:22 addons-407417 kubelet[1307]: I1101 09:57:22.140221    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.5443933570000001 podStartE2EDuration="4.140202454s" podCreationTimestamp="2025-11-01 09:57:18 +0000 UTC" firstStartedPulling="2025-11-01 09:57:18.712352178 +0000 UTC m=+114.104722201" lastFinishedPulling="2025-11-01 09:57:21.308161282 +0000 UTC m=+116.700531298" observedRunningTime="2025-11-01 09:57:22.140188552 +0000 UTC m=+117.532558584" watchObservedRunningTime="2025-11-01 09:57:22.140202454 +0000 UTC m=+117.532572486"
	Nov 01 09:57:24 addons-407417 kubelet[1307]: I1101 09:57:24.683317    1307 scope.go:117] "RemoveContainer" containerID="0c24a2491e264922afa12178d8fd22bc4e65a7acb8ceda7e99c69ae18706c87e"
	Nov 01 09:57:24 addons-407417 kubelet[1307]: I1101 09:57:24.691579    1307 scope.go:117] "RemoveContainer" containerID="460c16796789fcd248661392a4ee4d72c92c68a83f3e690721ea255b09104d81"
	Nov 01 09:57:28 addons-407417 kubelet[1307]: E1101 09:57:28.674888    1307 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42206->127.0.0.1:46435: write tcp 127.0.0.1:42206->127.0.0.1:46435: write: broken pipe
	Nov 01 09:57:29 addons-407417 kubelet[1307]: I1101 09:57:29.362299    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1d37a5cb-fbba-4ff0-94a5-b78041de3ea1-gcp-creds\") pod \"helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386\" (UID: \"1d37a5cb-fbba-4ff0-94a5-b78041de3ea1\") " pod="local-path-storage/helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386"
	Nov 01 09:57:29 addons-407417 kubelet[1307]: I1101 09:57:29.362356    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/1d37a5cb-fbba-4ff0-94a5-b78041de3ea1-data\") pod \"helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386\" (UID: \"1d37a5cb-fbba-4ff0-94a5-b78041de3ea1\") " pod="local-path-storage/helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386"
	Nov 01 09:57:29 addons-407417 kubelet[1307]: I1101 09:57:29.362386    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/1d37a5cb-fbba-4ff0-94a5-b78041de3ea1-script\") pod \"helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386\" (UID: \"1d37a5cb-fbba-4ff0-94a5-b78041de3ea1\") " pod="local-path-storage/helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386"
	Nov 01 09:57:29 addons-407417 kubelet[1307]: I1101 09:57:29.362553    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27csm\" (UniqueName: \"kubernetes.io/projected/1d37a5cb-fbba-4ff0-94a5-b78041de3ea1-kube-api-access-27csm\") pod \"helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386\" (UID: \"1d37a5cb-fbba-4ff0-94a5-b78041de3ea1\") " pod="local-path-storage/helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386"
	
	
	==> storage-provisioner [c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1] <==
	W1101 09:57:06.450935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:08.454367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:08.458347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:10.462094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:10.467137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:12.471032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:12.474782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:14.477705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:14.481378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:16.484171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:16.487870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:18.491176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:18.495354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:20.498147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:20.501839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:22.504819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:22.508513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:24.511257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:24.515083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:26.517656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:26.523439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:28.525980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:28.529890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:30.533208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:57:30.537792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
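The block of storage-provisioner warnings in the dump above is emitted each time the provisioner touches the legacy v1 Endpoints API, most likely for its leader-election lock; on this cluster they are deprecation warnings only, not errors. A quick, hedged way to see what the cluster exposes on the replacement APIs (namespace kube-system as in the log; nothing here asserts which object the provisioner actually locks on):

	kubectl --context addons-407417 -n kube-system get endpointslices
	kubectl --context addons-407417 -n kube-system get leases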
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-407417 -n addons-407417
helpers_test.go:269: (dbg) Run:  kubectl --context addons-407417 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: test-local-path ingress-nginx-admission-create-hqmb4 ingress-nginx-admission-patch-ppfh2 registry-creds-764b6fb674-v2bwb helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-407417 describe pod test-local-path ingress-nginx-admission-create-hqmb4 ingress-nginx-admission-patch-ppfh2 registry-creds-764b6fb674-v2bwb helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-407417 describe pod test-local-path ingress-nginx-admission-create-hqmb4 ingress-nginx-admission-patch-ppfh2 registry-creds-764b6fb674-v2bwb helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386: exit status 1 (76.659719ms)

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8f27k (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-8f27k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hqmb4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ppfh2" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-v2bwb" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-407417 describe pod test-local-path ingress-nginx-admission-create-hqmb4 ingress-nginx-admission-patch-ppfh2 registry-creds-764b6fb674-v2bwb helper-pod-create-pvc-82607fe7-5a15-4749-9c83-e78b928a7386: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable headlamp --alsologtostderr -v=1: exit status 11 (253.804833ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:57:31.375167   72374 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:31.375456   72374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:31.375467   72374 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:31.375472   72374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:31.375744   72374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:57:31.376010   72374 mustload.go:66] Loading cluster: addons-407417
	I1101 09:57:31.376424   72374 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:31.376445   72374 addons.go:607] checking whether the cluster is paused
	I1101 09:57:31.376559   72374 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:31.376583   72374 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:57:31.376958   72374 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:57:31.394986   72374 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:31.395046   72374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:57:31.412269   72374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:57:31.511816   72374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:57:31.511925   72374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:57:31.539575   72374 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:57:31.539595   72374 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:57:31.539599   72374 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:57:31.539602   72374 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:57:31.539605   72374 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:57:31.539608   72374 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:57:31.539611   72374 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:57:31.539614   72374 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:57:31.539616   72374 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:57:31.539626   72374 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:57:31.539628   72374 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:57:31.539631   72374 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:57:31.539633   72374 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:57:31.539636   72374 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:57:31.539638   72374 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:57:31.539643   72374 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:57:31.539645   72374 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:57:31.539649   72374 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:57:31.539652   72374 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:57:31.539654   72374 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:57:31.539657   72374 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:57:31.539659   72374 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:57:31.539661   72374 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:57:31.539673   72374 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:57:31.539678   72374 cri.go:89] found id: ""
	I1101 09:57:31.539717   72374 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:57:31.552931   72374 out.go:203] 
	W1101 09:57:31.554048   72374 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:57:31.554066   72374 out.go:285] * 
	* 
	W1101 09:57:31.558030   72374 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:57:31.559238   72374 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.65s)
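Every "addons disable" failure in this report follows the pattern visible in the stderr above: minikube's paused-state check first lists kube-system containers through crictl (which succeeds), then runs sudo runc list -f json, which exits 1 with "open /run/runc: no such file or directory" on this cri-o node. A hedged way to reproduce the probe by hand, using only the commands already shown in the log:

	out/minikube-linux-amd64 -p addons-407417 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 -p addons-407417 ssh -- sudo runc list -f json

The second command is the failing step. runc does accept a global --root flag that could point it at the state directory cri-o actually uses, but that path would have to come from the node's crio configuration; this report does not establish it, so no value is suggested here.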

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-rvb7g" [d0a3458f-d070-40e7-a8fc-887a2a278418] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0032776s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (239.984665ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:57:36.625632   72653 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:36.625900   72653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:36.625911   72653 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:36.625915   72653 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:36.626107   72653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:57:36.626373   72653 mustload.go:66] Loading cluster: addons-407417
	I1101 09:57:36.626720   72653 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:36.626736   72653 addons.go:607] checking whether the cluster is paused
	I1101 09:57:36.626817   72653 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:36.626832   72653 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:57:36.627160   72653 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:57:36.643681   72653 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:36.643733   72653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:57:36.660245   72653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:57:36.757986   72653 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:57:36.758071   72653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:57:36.785972   72653 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:57:36.785996   72653 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:57:36.786001   72653 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:57:36.786006   72653 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:57:36.786011   72653 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:57:36.786018   72653 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:57:36.786022   72653 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:57:36.786027   72653 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:57:36.786032   72653 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:57:36.786039   72653 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:57:36.786043   72653 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:57:36.786047   72653 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:57:36.786052   72653 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:57:36.786061   72653 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:57:36.786065   72653 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:57:36.786072   72653 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:57:36.786075   72653 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:57:36.786078   72653 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:57:36.786081   72653 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:57:36.786083   72653 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:57:36.786086   72653 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:57:36.786088   72653 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:57:36.786090   72653 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:57:36.786093   72653 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:57:36.786095   72653 cri.go:89] found id: ""
	I1101 09:57:36.786131   72653 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:57:36.799739   72653 out.go:203] 
	W1101 09:57:36.800855   72653 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:57:36.800872   72653 out.go:285] * 
	* 
	W1101 09:57:36.806127   72653 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:57:36.807347   72653 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.11s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-407417 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-407417 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-407417 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [a3ad081c-cc53-4264-9981-1cb542779330] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [a3ad081c-cc53-4264-9981-1cb542779330] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [a3ad081c-cc53-4264-9981-1cb542779330] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002708643s
addons_test.go:967: (dbg) Run:  kubectl --context addons-407417 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 ssh "cat /opt/local-path-provisioner/pvc-82607fe7-5a15-4749-9c83-e78b928a7386_default_test-pvc/file1"
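Up to this point the addon itself behaves: the local-path provisioner creates the volume directory and the ssh cat above reads back the file written by test-local-path; only the final addon-disable step below fails, for the same runc reason as the other parallel tests. A minimal manual spot-check, using the PVC name and host path copied from the log (valid only while the claim and directory still exist, i.e. before the delete steps that follow):

	kubectl --context addons-407417 get pvc test-pvc -o jsonpath='{.status.phase}'
	out/minikube-linux-amd64 -p addons-407417 ssh "cat /opt/local-path-provisioner/pvc-82607fe7-5a15-4749-9c83-e78b928a7386_default_test-pvc/file1"

The expected file content is the string local-path-provisioner, which is what the busybox container's command writes to /test/file1 according to the pod description earlier in this report.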
addons_test.go:988: (dbg) Run:  kubectl --context addons-407417 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-407417 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (250.949275ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:57:38.835308   72902 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:38.835450   72902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:38.835464   72902 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:38.835470   72902 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:38.835682   72902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:57:38.835996   72902 mustload.go:66] Loading cluster: addons-407417
	I1101 09:57:38.836345   72902 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:38.836363   72902 addons.go:607] checking whether the cluster is paused
	I1101 09:57:38.836459   72902 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:38.836481   72902 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:57:38.836929   72902 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:57:38.853308   72902 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:38.853368   72902 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:57:38.869946   72902 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:57:38.969872   72902 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:57:38.969993   72902 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:57:39.003441   72902 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:57:39.003465   72902 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:57:39.003471   72902 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:57:39.003486   72902 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:57:39.003504   72902 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:57:39.003510   72902 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:57:39.003514   72902 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:57:39.003519   72902 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:57:39.003523   72902 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:57:39.003534   72902 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:57:39.003552   72902 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:57:39.003560   72902 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:57:39.003564   72902 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:57:39.003568   72902 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:57:39.003575   72902 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:57:39.003586   72902 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:57:39.003593   72902 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:57:39.003603   72902 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:57:39.003607   72902 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:57:39.003615   72902 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:57:39.003622   72902 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:57:39.003629   72902 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:57:39.003634   72902 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:57:39.003640   72902 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:57:39.003644   72902 cri.go:89] found id: ""
	I1101 09:57:39.003687   72902 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:57:39.019233   72902 out.go:203] 
	W1101 09:57:39.020366   72902 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:57:39.020391   72902 out.go:285] * 
	* 
	W1101 09:57:39.026019   72902 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:57:39.027206   72902 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.11s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-z5mvf" [2a31bc49-c837-4001-8e0c-1855ae5050fd] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003634162s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (243.949014ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:57:33.979162   72490 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:33.979447   72490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:33.979458   72490 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:33.979463   72490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:33.979683   72490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:57:33.979922   72490 mustload.go:66] Loading cluster: addons-407417
	I1101 09:57:33.980256   72490 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:33.980271   72490 addons.go:607] checking whether the cluster is paused
	I1101 09:57:33.980353   72490 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:33.980368   72490 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:57:33.980750   72490 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:57:33.997219   72490 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:33.997258   72490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:57:34.014880   72490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:57:34.113095   72490 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:57:34.113216   72490 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:57:34.141784   72490 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:57:34.141804   72490 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:57:34.141808   72490 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:57:34.141811   72490 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:57:34.141814   72490 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:57:34.141817   72490 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:57:34.141820   72490 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:57:34.141822   72490 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:57:34.141825   72490 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:57:34.141834   72490 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:57:34.141838   72490 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:57:34.141842   72490 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:57:34.141847   72490 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:57:34.141851   72490 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:57:34.141855   72490 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:57:34.141875   72490 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:57:34.141883   72490 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:57:34.141888   72490 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:57:34.141890   72490 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:57:34.141892   72490 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:57:34.141895   72490 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:57:34.141897   72490 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:57:34.141899   72490 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:57:34.141902   72490 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:57:34.141904   72490 cri.go:89] found id: ""
	I1101 09:57:34.141943   72490 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:57:34.155636   72490 out.go:203] 
	W1101 09:57:34.156728   72490 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:57:34.156750   72490 out.go:285] * 
	* 
	W1101 09:57:34.160742   72490 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:57:34.161906   72490 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)
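Note: each of these addon-disable failures shares the same root cause visible in the log above. Before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node; on this CRI-O cluster the runc state directory /run/runc does not exist, so the check itself exits 1 and the command aborts with MK_ADDON_DISABLE_PAUSED even though nothing is paused. A minimal way to reproduce the check by hand (a sketch; the assumption that CRI-O is driving containers through crun, with state under /run/crun, is not taken from this log and should be verified):

	# Reproduce the failing paused-check from the log inside the node:
	minikube -p addons-407417 ssh -- sudo runc list -f json
	# -> level=error msg="open /run/runc: no such file or directory"

	# Check which low-level runtime CRI-O is configured to use (assumption: crun):
	minikube -p addons-407417 ssh -- sudo grep -r default_runtime /etc/crio/

	# If the runtime is crun, its state lives under /run/crun, so listing via crun
	# shows the containers the paused-check was looking for:
	minikube -p addons-407417 ssh -- sudo crun list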

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-7p2rl" [77a17c08-6cf8-424a-a2b8-572198dbade2] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0029404s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable yakd --alsologtostderr -v=1: exit status 11 (241.41075ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:57:49.842839   74496 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:49.843121   74496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:49.843133   74496 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:49.843138   74496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:49.843341   74496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:57:49.843658   74496 mustload.go:66] Loading cluster: addons-407417
	I1101 09:57:49.844015   74496 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:49.844035   74496 addons.go:607] checking whether the cluster is paused
	I1101 09:57:49.844140   74496 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:49.844162   74496 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:57:49.844609   74496 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:57:49.862021   74496 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:49.862068   74496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:57:49.879218   74496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:57:49.979791   74496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:57:49.979877   74496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:57:50.008285   74496 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:57:50.008305   74496 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:57:50.008308   74496 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:57:50.008314   74496 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:57:50.008316   74496 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:57:50.008319   74496 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:57:50.008322   74496 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:57:50.008324   74496 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:57:50.008327   74496 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:57:50.008332   74496 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:57:50.008334   74496 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:57:50.008337   74496 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:57:50.008339   74496 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:57:50.008342   74496 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:57:50.008345   74496 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:57:50.008351   74496 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:57:50.008357   74496 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:57:50.008361   74496 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:57:50.008364   74496 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:57:50.008366   74496 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:57:50.008369   74496 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:57:50.008371   74496 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:57:50.008373   74496 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:57:50.008375   74496 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:57:50.008378   74496 cri.go:89] found id: ""
	I1101 09:57:50.008412   74496 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:57:50.022158   74496 out.go:203] 
	W1101 09:57:50.023112   74496 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:57:50.023131   74496 out.go:285] * 
	* 
	W1101 09:57:50.027333   74496 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:57:50.028362   74496 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-f46dd" [a6f5f39e-d94a-44ab-bb1f-1030e866f7e6] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003900615s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-407417 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-407417 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (241.581031ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:57:48.760140   74340 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:48.760411   74340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:48.760420   74340 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:48.760425   74340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:48.760663   74340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:57:48.760950   74340 mustload.go:66] Loading cluster: addons-407417
	I1101 09:57:48.761286   74340 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:48.761301   74340 addons.go:607] checking whether the cluster is paused
	I1101 09:57:48.761380   74340 config.go:182] Loaded profile config "addons-407417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:48.761395   74340 host.go:66] Checking if "addons-407417" exists ...
	I1101 09:57:48.761739   74340 cli_runner.go:164] Run: docker container inspect addons-407417 --format={{.State.Status}}
	I1101 09:57:48.778341   74340 ssh_runner.go:195] Run: systemctl --version
	I1101 09:57:48.778387   74340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-407417
	I1101 09:57:48.795850   74340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/addons-407417/id_rsa Username:docker}
	I1101 09:57:48.893124   74340 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:57:48.893236   74340 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:57:48.921260   74340 cri.go:89] found id: "f08090ded8635153b9ffcea01293f8c9b712369f9852199c14027150bc2c8568"
	I1101 09:57:48.921280   74340 cri.go:89] found id: "4a12913519788234a4cebf2bbfa5df41df487f96195ef52e7591320824b8453d"
	I1101 09:57:48.921284   74340 cri.go:89] found id: "fa059e5944f6d47507d35ebca9c39a53e207b47118e8f8b5447023b097dd56f0"
	I1101 09:57:48.921287   74340 cri.go:89] found id: "fa38ee36042b142a53460ae282092cc534abe4690b6c00548c8b3d7e710116e2"
	I1101 09:57:48.921290   74340 cri.go:89] found id: "e2fa965dde20e4a6284e77727c858ff80292b2d8440bc29ec6d16c1e4ccf162d"
	I1101 09:57:48.921293   74340 cri.go:89] found id: "26ca487996d46937fb59e9a89abc3bcaae3b1169a93faaab50673052e86bfe4e"
	I1101 09:57:48.921295   74340 cri.go:89] found id: "209034ab12f223708c370ed4d1ba5886df5e9685ef0496d6cb7544238ca9a2dd"
	I1101 09:57:48.921298   74340 cri.go:89] found id: "bddb1deaf2b509f5acbdb1a864b5b18786577d85c2a12bcba17f10d3ff4bdeaf"
	I1101 09:57:48.921300   74340 cri.go:89] found id: "febf4ba9fa4880d64efabde007b14f87919bff9c2f8ad237fcde7fbb068be442"
	I1101 09:57:48.921306   74340 cri.go:89] found id: "3901514c12896315f0f4552975763bf813b05237a92aaf25b8f0251f96a7b15f"
	I1101 09:57:48.921309   74340 cri.go:89] found id: "dea585cf0fda561d201b27bc0c6f52b73b2a944e18939c068c57bd3d24ff0b34"
	I1101 09:57:48.921331   74340 cri.go:89] found id: "1a43d3e93f88ab7c0c7d3cb7634810926b13994e05f667a79b397dcb1935c123"
	I1101 09:57:48.921336   74340 cri.go:89] found id: "f1cbdd3dea0c8a045ac2e14e7c36966c39562ea88f5772e6bb492c66546d6430"
	I1101 09:57:48.921339   74340 cri.go:89] found id: "e9ee42459c8cc8dfc4e8a8441a33f6df003061fe7d76b5cc16665e638b787896"
	I1101 09:57:48.921343   74340 cri.go:89] found id: "c21e111d12956777260739b19c96561ea07263810656bca7539f17d343367219"
	I1101 09:57:48.921350   74340 cri.go:89] found id: "ee71e1d3f20be0c2899b1c947b1b6fc862762b8ac9d663d4ffc595c688ee8394"
	I1101 09:57:48.921358   74340 cri.go:89] found id: "1800341cdaf4e78b77e01ebf566cd690541871e5197ac6434162e64a6dd1f5a2"
	I1101 09:57:48.921363   74340 cri.go:89] found id: "c652bc696ccca333f7ba9a78ab6d06752c005f5ac8872c7c7a4f66f40a6b3dc1"
	I1101 09:57:48.921366   74340 cri.go:89] found id: "3e48e42054985f0910c9ce8a1c203fa737f0fd4df11f37f379d7fedb6064e3ab"
	I1101 09:57:48.921369   74340 cri.go:89] found id: "93000561a31e894cf399b3494532379ee974f4f58c8e9e38325db99a0b16b265"
	I1101 09:57:48.921371   74340 cri.go:89] found id: "b02d1d64a55b772ab8db89e4dd62afe5be8f12ab3368841e52e8c3c88e9e0be2"
	I1101 09:57:48.921373   74340 cri.go:89] found id: "7f28a4faf388877aff8cf43e90f86db953abcf0e0ae7058c24bb93a65bbb5f45"
	I1101 09:57:48.921376   74340 cri.go:89] found id: "6aaf19e53fbb2813bb98ca0288f7f91941de6b8905f40f3c635d18a1deabadda"
	I1101 09:57:48.921378   74340 cri.go:89] found id: "4d0958fc37fb7ac9c43e6e860959fe65d3ef69246199b92ed82b536cc001a9b9"
	I1101 09:57:48.921380   74340 cri.go:89] found id: ""
	I1101 09:57:48.921420   74340 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:57:48.935806   74340 out.go:203] 
	W1101 09:57:48.937066   74340 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:57:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:57:48.937090   74340 out.go:285] * 
	* 
	W1101 09:57:48.941142   74340 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:57:48.942421   74340 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-407417 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (602.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-638125 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-638125 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-d764l" [9b8ed837-d4a6-4d31-8090-a9b67c975af2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-638125 -n functional-638125
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-01 10:13:11.733102119 +0000 UTC m=+1133.332376994
functional_test.go:1645: (dbg) Run:  kubectl --context functional-638125 describe po hello-node-connect-7d85dfc575-d764l -n default
functional_test.go:1645: (dbg) kubectl --context functional-638125 describe po hello-node-connect-7d85dfc575-d764l -n default:
Name:             hello-node-connect-7d85dfc575-d764l
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-638125/192.168.49.2
Start Time:       Sat, 01 Nov 2025 10:03:11 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9sp5v (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-9sp5v:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-d764l to functional-638125
Warning  Failed     7m14s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m14s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x19 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m48s (x20 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Normal   Pulling    4m33s (x6 over 10m)   kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-638125 logs hello-node-connect-7d85dfc575-d764l -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-638125 logs hello-node-connect-7d85dfc575-d764l -n default: exit status 1 (67.126248ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-d764l" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-638125 logs hello-node-connect-7d85dfc575-d764l -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-638125 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-d764l
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-638125/192.168.49.2
Start Time:       Sat, 01 Nov 2025 10:03:11 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9sp5v (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-9sp5v:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-d764l to functional-638125
Warning  Failed     7m14s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m14s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x19 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m48s (x20 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Normal   Pulling    4m33s (x6 over 10m)   kubelet            Pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-638125 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-638125 logs -l app=hello-node-connect: exit status 1 (63.287915ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-d764l" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-638125 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-638125 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.3.182
IPs:                      10.96.3.182
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32469/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
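The repeated ErrImagePull here comes from CRI-O's short-name handling rather than from the service itself: the deployment was created with the unqualified image `kicbase/echo-server`, and with short-name-mode set to "enforcing" in the node's registries configuration an ambiguous short name is rejected ("returns ambiguous list") instead of being resolved. A sketch of how this could be checked and worked around with a fully qualified reference (the `docker.io/kicbase/echo-server:1.0` reference and its tag are assumptions, not taken from this log):

	# Inspect the node's short-name policy (enforcing vs. permissive):
	minikube -p functional-638125 ssh -- grep -R short-name /etc/containers/

	# Recreate the deployment with a fully qualified image reference so no
	# short-name resolution is needed (image tag is assumed):
	kubectl --context functional-638125 delete deployment hello-node-connect
	kubectl --context functional-638125 create deployment hello-node-connect \
	  --image docker.io/kicbase/echo-server:1.0
	kubectl --context functional-638125 expose deployment hello-node-connect \
	  --type=NodePort --port=8080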
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-638125
helpers_test.go:243: (dbg) docker inspect functional-638125:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37c82e1d7be4a0e51221d7ed6909591144abc7ba131631e4e4f29d69c882bb1c",
	        "Created": "2025-11-01T10:01:32.133577784Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 85809,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:01:32.164665483Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/37c82e1d7be4a0e51221d7ed6909591144abc7ba131631e4e4f29d69c882bb1c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37c82e1d7be4a0e51221d7ed6909591144abc7ba131631e4e4f29d69c882bb1c/hostname",
	        "HostsPath": "/var/lib/docker/containers/37c82e1d7be4a0e51221d7ed6909591144abc7ba131631e4e4f29d69c882bb1c/hosts",
	        "LogPath": "/var/lib/docker/containers/37c82e1d7be4a0e51221d7ed6909591144abc7ba131631e4e4f29d69c882bb1c/37c82e1d7be4a0e51221d7ed6909591144abc7ba131631e4e4f29d69c882bb1c-json.log",
	        "Name": "/functional-638125",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-638125:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-638125",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37c82e1d7be4a0e51221d7ed6909591144abc7ba131631e4e4f29d69c882bb1c",
	                "LowerDir": "/var/lib/docker/overlay2/3907dcdb8675a3afeb1e225f1428b07cc5e1a2088b484898c559ed5a2f8aa2e1-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3907dcdb8675a3afeb1e225f1428b07cc5e1a2088b484898c559ed5a2f8aa2e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3907dcdb8675a3afeb1e225f1428b07cc5e1a2088b484898c559ed5a2f8aa2e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3907dcdb8675a3afeb1e225f1428b07cc5e1a2088b484898c559ed5a2f8aa2e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-638125",
	                "Source": "/var/lib/docker/volumes/functional-638125/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-638125",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-638125",
	                "name.minikube.sigs.k8s.io": "functional-638125",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08b2c75087a5bd2a6656cce9817854f1c1e16c18a8eb3a2c12f441183f38322c",
	            "SandboxKey": "/var/run/docker/netns/08b2c75087a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-638125": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:50:15:1d:b1:51",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dd9192637f5b9540ae2ed1e86a380795cd18aa292c30ec4a83628893fb4b394a",
	                    "EndpointID": "6cc703801743171fd2a754deb47fd28c6ebe23b7ad069415c723e13f7d930295",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-638125",
	                        "37c82e1d7be4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-638125 -n functional-638125
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-638125 logs -n 25: (1.23983278s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-638125 ssh sudo umount -f /mount-9p                                                                    │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ ssh            │ functional-638125 ssh findmnt -T /mount1                                                                          │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ mount          │ -p functional-638125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup137166997/001:/mount2 --alsologtostderr -v=1 │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ mount          │ -p functional-638125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup137166997/001:/mount1 --alsologtostderr -v=1 │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ mount          │ -p functional-638125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup137166997/001:/mount3 --alsologtostderr -v=1 │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ ssh            │ functional-638125 ssh findmnt -T /mount1                                                                          │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ ssh            │ functional-638125 ssh findmnt -T /mount2                                                                          │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ ssh            │ functional-638125 ssh findmnt -T /mount3                                                                          │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ mount          │ -p functional-638125 --kill=true                                                                                  │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ start          │ -p functional-638125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ start          │ -p functional-638125 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                   │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ ssh            │ functional-638125 ssh sudo systemctl is-active docker                                                             │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ ssh            │ functional-638125 ssh sudo systemctl is-active containerd                                                         │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ start          │ -p functional-638125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-638125 --alsologtostderr -v=1                                                    │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ update-context │ functional-638125 update-context --alsologtostderr -v=2                                                           │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ update-context │ functional-638125 update-context --alsologtostderr -v=2                                                           │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ update-context │ functional-638125 update-context --alsologtostderr -v=2                                                           │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ image          │ functional-638125 image ls --format short --alsologtostderr                                                       │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ image          │ functional-638125 image ls --format json --alsologtostderr                                                        │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ image          │ functional-638125 image ls --format table --alsologtostderr                                                       │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ image          │ functional-638125 image ls --format yaml --alsologtostderr                                                        │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ ssh            │ functional-638125 ssh pgrep buildkitd                                                                             │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │                     │
	│ image          │ functional-638125 image build -t localhost/my-image:functional-638125 testdata/build --alsologtostderr            │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	│ image          │ functional-638125 image ls                                                                                        │ functional-638125 │ jenkins │ v1.37.0 │ 01 Nov 25 10:03 UTC │ 01 Nov 25 10:03 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:03:43
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:03:43.384699  101155 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:03:43.384958  101155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:03:43.384969  101155 out.go:374] Setting ErrFile to fd 2...
	I1101 10:03:43.384974  101155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:03:43.385307  101155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:03:43.385746  101155 out.go:368] Setting JSON to false
	I1101 10:03:43.386785  101155 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6363,"bootTime":1761985060,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:03:43.386878  101155 start.go:143] virtualization: kvm guest
	I1101 10:03:43.388829  101155 out.go:179] * [functional-638125] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:03:43.389971  101155 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:03:43.390003  101155 notify.go:221] Checking for updates...
	I1101 10:03:43.391947  101155 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:03:43.392981  101155 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:03:43.394009  101155 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:03:43.395006  101155 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:03:43.396315  101155 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:03:43.397932  101155 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:03:43.398696  101155 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:03:43.422410  101155 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:03:43.422569  101155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:03:43.478341  101155 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 10:03:43.469009252 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:03:43.478440  101155 docker.go:319] overlay module found
	I1101 10:03:43.480045  101155 out.go:179] * Using the docker driver based on the existing profile
	I1101 10:03:43.481111  101155 start.go:309] selected driver: docker
	I1101 10:03:43.481125  101155 start.go:930] validating driver "docker" against &{Name:functional-638125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-638125 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:03:43.481211  101155 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:03:43.482896  101155 out.go:203] 
	W1101 10:03:43.483916  101155 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 10:03:43.484925  101155 out.go:203] 
	
	
	==> CRI-O <==
	Nov 01 10:03:48 functional-638125 crio[3588]: time="2025-11-01T10:03:48.762738582Z" level=info msg="Stopping pod sandbox: abb28f8085fbd0e3c2c7a2e1aab9c4e2399c5f6e34d771061dadac440ac5f3c5" id=df91e10a-42e8-4e3c-917c-8f988904b339 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 10:03:48 functional-638125 crio[3588]: time="2025-11-01T10:03:48.76278859Z" level=info msg="Stopped pod sandbox (already stopped): abb28f8085fbd0e3c2c7a2e1aab9c4e2399c5f6e34d771061dadac440ac5f3c5" id=df91e10a-42e8-4e3c-917c-8f988904b339 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 10:03:48 functional-638125 crio[3588]: time="2025-11-01T10:03:48.763162192Z" level=info msg="Removing pod sandbox: abb28f8085fbd0e3c2c7a2e1aab9c4e2399c5f6e34d771061dadac440ac5f3c5" id=d43de0a0-3e41-4b83-82f7-06cbda13fab7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 10:03:48 functional-638125 crio[3588]: time="2025-11-01T10:03:48.766069772Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:03:48 functional-638125 crio[3588]: time="2025-11-01T10:03:48.766117177Z" level=info msg="Removed pod sandbox: abb28f8085fbd0e3c2c7a2e1aab9c4e2399c5f6e34d771061dadac440ac5f3c5" id=d43de0a0-3e41-4b83-82f7-06cbda13fab7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 10:03:50 functional-638125 crio[3588]: time="2025-11-01T10:03:50.413374971Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=59eeb3a7-27e1-437c-b06b-950ff7a7762e name=/runtime.v1.ImageService/PullImage
	Nov 01 10:03:50 functional-638125 crio[3588]: time="2025-11-01T10:03:50.41405912Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=c7406e85-e848-40f7-b7a3-7dfd99ec9020 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:03:50 functional-638125 crio[3588]: time="2025-11-01T10:03:50.420295855Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=549a047d-7ef9-487a-9338-2e35fcaec18f name=/runtime.v1.ImageService/PullImage
	Nov 01 10:03:50 functional-638125 crio[3588]: time="2025-11-01T10:03:50.420669404Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=1339e972-4152-4cb0-a321-53777f9251db name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:03:50 functional-638125 crio[3588]: time="2025-11-01T10:03:50.432029014Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4mkcp/kubernetes-dashboard" id=3b4a1f75-b8d9-4857-af72-291cefcbaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:03:50 functional-638125 crio[3588]: time="2025-11-01T10:03:50.432167008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:03:50 functional-638125 crio[3588]: time="2025-11-01T10:03:50.438673199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:03:50 functional-638125 crio[3588]: time="2025-11-01T10:03:50.438898828Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2af01fa765dab5bb6c8045a60926fd3d9f2ab5cacdf08f92628736610176ed16/merged/etc/group: no such file or directory"
	Nov 01 10:03:50 functional-638125 crio[3588]: time="2025-11-01T10:03:50.439266233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:03:50 functional-638125 crio[3588]: time="2025-11-01T10:03:50.459211172Z" level=info msg="Created container 1395ede9754fa88631a742f02a41dcce1faeb14b3d2786a6ac9deb44e1a1c94c: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4mkcp/kubernetes-dashboard" id=3b4a1f75-b8d9-4857-af72-291cefcbaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:03:50 functional-638125 crio[3588]: time="2025-11-01T10:03:50.459866084Z" level=info msg="Starting container: 1395ede9754fa88631a742f02a41dcce1faeb14b3d2786a6ac9deb44e1a1c94c" id=459a04bd-ae9b-4d91-9e6a-d2f7e51d8034 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:03:50 functional-638125 crio[3588]: time="2025-11-01T10:03:50.461681384Z" level=info msg="Started container" PID=7650 containerID=1395ede9754fa88631a742f02a41dcce1faeb14b3d2786a6ac9deb44e1a1c94c description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4mkcp/kubernetes-dashboard id=459a04bd-ae9b-4d91-9e6a-d2f7e51d8034 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fffe3241e4fe4fbb91941f3e500c2b5d7ea5827c8e1053f77e8eac05b3b0841f
	Nov 01 10:03:53 functional-638125 crio[3588]: time="2025-11-01T10:03:53.768305085Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=865acf49-82c9-4f97-94d2-ec69ddd250c9 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:04:13 functional-638125 crio[3588]: time="2025-11-01T10:04:13.768115767Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f5bd9207-91aa-4c4a-a110-02707faf1265 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:04:35 functional-638125 crio[3588]: time="2025-11-01T10:04:35.768090946Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8bb00b74-52df-46e7-8f57-6baed0e76c81 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:05:03 functional-638125 crio[3588]: time="2025-11-01T10:05:03.768207553Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=02ba7b1c-fa87-487d-a13f-1da100744190 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:05:57 functional-638125 crio[3588]: time="2025-11-01T10:05:57.768541231Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7276e280-8e58-4695-b4da-491dd0566533 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:06:34 functional-638125 crio[3588]: time="2025-11-01T10:06:34.767711386Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4fe24c3f-b812-4977-b10f-9663a7c33e66 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:08:38 functional-638125 crio[3588]: time="2025-11-01T10:08:38.76822818Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8ae4d67b-e45e-48ff-86d9-ceeaa1b114e0 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:09:23 functional-638125 crio[3588]: time="2025-11-01T10:09:23.769851881Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7cb902f7-b532-4f3a-b1ca-9b691f1e96ed name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1395ede9754fa       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   fffe3241e4fe4       kubernetes-dashboard-855c9754f9-4mkcp        kubernetes-dashboard
	1697e86e7a682       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   a73a0f2083a2e       dashboard-metrics-scraper-77bf4d6c4c-pqjtc   kubernetes-dashboard
	4c66a64cc157f       docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58                  9 minutes ago       Running             myfrontend                  0                   434ed60dc4f76       sp-pod                                       default
	0c1dbdd02ddbd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   71c48d1555b18       busybox-mount                                default
	456aeca5bd7de       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  9 minutes ago       Running             nginx                       0                   f2f5766d33616       nginx-svc                                    default
	685a5644c84f7       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   e5c715f4f86e5       mysql-5bb876957f-6j8zb                       default
	b9bd0248d8054       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   eba4ee1f319a4       kube-apiserver-functional-638125             kube-system
	c1dd1cf7e1365       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   338409d0e39f4       kube-controller-manager-functional-638125    kube-system
	173f113869e42       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   338409d0e39f4       kube-controller-manager-functional-638125    kube-system
	7785d20b169d9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   4db44570fa7f8       kube-scheduler-functional-638125             kube-system
	f87b8eb02a86f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   4d707393005f5       etcd-functional-638125                       kube-system
	b1f6e4a12d827       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   53ae80dbed854       kindnet-7fbf2                                kube-system
	ba002a0eae678       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   406a87afc3059       coredns-66bc5c9577-g6ck6                     kube-system
	2b9929e479fe0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   02c312ef67daa       storage-provisioner                          kube-system
	b49f5419ce103       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   2ead5df928116       kube-proxy-q8kzf                             kube-system
	ed1363b250270       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   406a87afc3059       coredns-66bc5c9577-g6ck6                     kube-system
	181353c5c950e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   02c312ef67daa       storage-provisioner                          kube-system
	a61a764df6799       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   2ead5df928116       kube-proxy-q8kzf                             kube-system
	80172f40fbaa5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   53ae80dbed854       kindnet-7fbf2                                kube-system
	391d0e0def7a7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   4d707393005f5       etcd-functional-638125                       kube-system
	ae193c2207493       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   4db44570fa7f8       kube-scheduler-functional-638125             kube-system
	
	
	==> coredns [ba002a0eae6786dff49d6ddbf14d272edcce6b644580645a0b00d0d538528f06] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54171 - 17006 "HINFO IN 8696091116554493583.93185210261538761. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.027738059s
	
	
	==> coredns [ed1363b25027020f52e90c5bef202c1139d5ba85cfe683197deb770caddc82ab] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43034 - 60844 "HINFO IN 4057990166599036177.159535900216554071. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.032988342s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-638125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-638125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=functional-638125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_01_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:01:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-638125
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:13:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:10:19 +0000   Sat, 01 Nov 2025 10:01:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:10:19 +0000   Sat, 01 Nov 2025 10:01:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:10:19 +0000   Sat, 01 Nov 2025 10:01:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:10:19 +0000   Sat, 01 Nov 2025 10:02:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-638125
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                1d5c6155-f7c1-421a-b0be-b6013cd8cbc9
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-vg67w                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  default                     hello-node-connect-7d85dfc575-d764l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-6j8zb                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 coredns-66bc5c9577-g6ck6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-638125                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-7fbf2                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-638125              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-638125     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-q8kzf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-638125              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-pqjtc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4mkcp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-638125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-638125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-638125 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-638125 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-638125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-638125 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-638125 event: Registered Node functional-638125 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-638125 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-638125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-638125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-638125 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-638125 event: Registered Node functional-638125 in Controller
	
	
	==> dmesg <==
	[  +0.077240] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.020831] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +6.657102] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 1 09:57] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.028293] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023905] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023938] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023934] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +2.047845] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[Nov 1 09:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +8.191344] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[ +16.382718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[ +32.253574] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	
	
	==> etcd [391d0e0def7a7b324eb9135802a4399e3927a9b0102fe613f0915eae1b0e44fa] <==
	{"level":"warn","ts":"2025-11-01T10:01:42.943155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:01:42.950396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:01:42.956738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:01:42.969714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:01:42.976208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:01:42.982708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:01:43.029884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60586","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:02:29.494963Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:02:29.495039Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-638125","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T10:02:29.495118Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:02:36.496381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:02:36.496481Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:02:36.496560Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-01T10:02:36.496558Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:02:36.496640Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:02:36.496655Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:02:36.496649Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T10:02:36.496684Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:02:36.496547Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:02:36.496709Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:02:36.496721Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:02:36.499186Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T10:02:36.499259Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:02:36.499293Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T10:02:36.499321Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-638125","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [f87b8eb02a86f9b69bed2cb051c778ae923e32b440e309ed03ce7d8c3406b563] <==
	{"level":"warn","ts":"2025-11-01T10:02:49.977771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:49.984371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:49.991400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:49.998657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.005054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.011961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.018112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.024055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.031740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.038125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.044900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.052354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.059286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.065281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.071030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.077118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.084014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.090725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.110833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.117708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.124990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:02:50.172519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58538","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:12:49.696502Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1167}
	{"level":"info","ts":"2025-11-01T10:12:49.714638Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1167,"took":"17.795178ms","hash":1529824271,"current-db-size-bytes":3559424,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1675264,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-11-01T10:12:49.714688Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1529824271,"revision":1167,"compact-revision":-1}
	
	
	==> kernel <==
	 10:13:13 up  1:55,  0 user,  load average: 0.18, 0.27, 0.95
	Linux functional-638125 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [80172f40fbaa5848d788139680805bafb45544da6b980ea33546fc77d399ac00] <==
	I1101 10:01:51.859993       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:01:51.860251       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1101 10:01:51.860388       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:01:51.860407       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:01:51.860432       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:01:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:01:52.156665       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:01:52.156695       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:01:52.156704       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:01:52.156849       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:01:52.456860       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:01:52.456885       1 metrics.go:72] Registering metrics
	I1101 10:01:52.456935       1 controller.go:711] "Syncing nftables rules"
	I1101 10:02:02.157710       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:02:02.157795       1 main.go:301] handling current node
	I1101 10:02:12.159123       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:02:12.159225       1 main.go:301] handling current node
	I1101 10:02:22.160585       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:02:22.160624       1 main.go:301] handling current node
	
	
	==> kindnet [b1f6e4a12d8277b98abd707a1bde27573ec439158e7bfebcefcee2f2f1cfa783] <==
	I1101 10:11:10.190451       1 main.go:301] handling current node
	I1101 10:11:20.187605       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:11:20.187634       1 main.go:301] handling current node
	I1101 10:11:30.187524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:11:30.187590       1 main.go:301] handling current node
	I1101 10:11:40.187872       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:11:40.187918       1 main.go:301] handling current node
	I1101 10:11:50.195983       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:11:50.196020       1 main.go:301] handling current node
	I1101 10:12:00.187678       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:12:00.187712       1 main.go:301] handling current node
	I1101 10:12:10.189992       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:12:10.190024       1 main.go:301] handling current node
	I1101 10:12:20.188441       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:12:20.188490       1 main.go:301] handling current node
	I1101 10:12:30.187340       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:12:30.187381       1 main.go:301] handling current node
	I1101 10:12:40.187367       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:12:40.187398       1 main.go:301] handling current node
	I1101 10:12:50.187726       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:12:50.187781       1 main.go:301] handling current node
	I1101 10:13:00.192213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:13:00.192254       1 main.go:301] handling current node
	I1101 10:13:10.189457       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 10:13:10.189516       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b9bd0248d805479850a063792acaac7501304a0847e9a621a7241f4d72832986] <==
	I1101 10:02:50.664113       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:02:50.763993       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:02:51.534111       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1101 10:02:51.739439       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1101 10:02:51.740598       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:02:51.744483       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:02:52.105672       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:02:52.196339       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:02:52.239062       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:02:52.243723       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:03:07.374572       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.41.77"}
	I1101 10:03:11.319893       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:03:11.406389       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.3.182"}
	I1101 10:03:12.027161       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.232.123"}
	I1101 10:03:16.178316       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.29.69"}
	E1101 10:03:26.156044       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57728: use of closed network connection
	E1101 10:03:27.045527       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57758: use of closed network connection
	E1101 10:03:28.965762       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57790: use of closed network connection
	I1101 10:03:29.259661       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.55.58"}
	E1101 10:03:36.164550       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47106: use of closed network connection
	I1101 10:03:44.306461       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:03:44.403787       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.87.185"}
	I1101 10:03:44.428315       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.211.66"}
	E1101 10:03:45.153301       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51146: use of closed network connection
	I1101 10:12:50.571821       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [173f113869e427706562bd2205f5fd846cc9780ae940be69b2a1c090b1dbab23] <==
	I1101 10:02:38.357172       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:02:38.733549       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1101 10:02:38.733630       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:02:38.735305       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 10:02:38.735303       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 10:02:38.735623       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1101 10:02:38.735714       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 10:02:48.743463       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [c1dd1cf7e1365887116d7e78e981014fa70e9592f9f6e004a7def9a83db0daaa] <==
	I1101 10:02:54.103436       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:02:54.103437       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:02:54.103567       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:02:54.103593       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:02:54.103597       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:02:54.103571       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:02:54.103636       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:02:54.103639       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:02:54.103657       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:02:54.103708       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:02:54.103859       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:02:54.104260       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:02:54.104296       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:02:54.104891       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:02:54.108294       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:02:54.109366       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:02:54.111433       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:02:54.117996       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:02:54.126951       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1101 10:03:44.348426       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:03:44.351782       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:03:44.355708       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:03:44.356776       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:03:44.358917       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:03:44.363807       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [a61a764df6799e73daeb00de4271bedde4d6c2345dba54d1f5f061cb33e8cdaa] <==
	I1101 10:01:51.717270       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:01:51.782056       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:01:51.882681       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:01:51.882726       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 10:01:51.882852       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:01:51.903188       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:01:51.903247       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:01:51.908261       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:01:51.908641       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:01:51.908678       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:01:51.910438       1 config.go:309] "Starting node config controller"
	I1101 10:01:51.910792       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:01:51.910837       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:01:51.910778       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:01:51.910884       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:01:51.910753       1 config.go:200] "Starting service config controller"
	I1101 10:01:51.910965       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:01:51.910765       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:01:51.911041       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:01:52.011098       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:01:52.011122       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:01:52.011154       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [b49f5419ce10390ef43377e7eb5b658a86415500460d1cd54247f47200af180a] <==
	I1101 10:02:29.872732       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:02:29.932451       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:02:30.033273       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:02:30.033314       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 10:02:30.033413       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:02:30.051882       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:02:30.051927       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:02:30.057320       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:02:30.057813       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:02:30.057833       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:02:30.059071       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:02:30.059093       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:02:30.059076       1 config.go:200] "Starting service config controller"
	I1101 10:02:30.059138       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:02:30.059176       1 config.go:309] "Starting node config controller"
	I1101 10:02:30.059193       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:02:30.059204       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:02:30.059215       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:02:30.059222       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:02:30.159267       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:02:30.159312       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:02:30.159328       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7785d20b169d9eb1a9e80f9c08565d4c03212add63b07ddf90314aba9e2f7c38] <==
	I1101 10:02:37.433114       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:02:38.623613       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:02:38.623643       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:02:38.628481       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:02:38.628527       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:02:38.628527       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:02:38.628537       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:02:38.628550       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:02:38.628559       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:02:38.628933       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:02:38.629188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:02:38.729036       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:02:38.729037       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:02:38.729075       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E1101 10:02:50.556693       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:02:50.557267       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:02:50.558472       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:02:50.558571       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 10:02:50.558591       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:02:50.558606       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:02:50.558619       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:02:50.558634       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:02:50.558772       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	
	
	==> kube-scheduler [ae193c2207493c2f71acb9b591d9bcb047f201d7c1adf83cff78a6ae0602d9a4] <==
	E1101 10:01:43.438698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:01:43.438709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:01:43.438829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:01:43.438864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:01:43.438983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:01:43.438986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:01:43.439906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:01:43.440105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:01:43.440272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:01:44.277973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:01:44.293038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:01:44.297872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:01:44.373858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:01:44.407980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:01:44.423539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:01:44.483913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:01:44.528806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:01:44.661712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 10:01:47.430145       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:02:36.605070       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:02:36.605216       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:02:36.605270       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:02:36.605376       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:02:36.605422       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:02:36.605454       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 10:10:31 functional-638125 kubelet[4317]: E1101 10:10:31.767026    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	Nov 01 10:10:36 functional-638125 kubelet[4317]: E1101 10:10:36.767583    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vg67w" podUID="42e0fdd6-77fb-4cd8-abbc-7b318962e8e8"
	Nov 01 10:10:44 functional-638125 kubelet[4317]: E1101 10:10:44.767980    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	Nov 01 10:10:48 functional-638125 kubelet[4317]: E1101 10:10:48.768150    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vg67w" podUID="42e0fdd6-77fb-4cd8-abbc-7b318962e8e8"
	Nov 01 10:10:58 functional-638125 kubelet[4317]: E1101 10:10:58.769245    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	Nov 01 10:11:02 functional-638125 kubelet[4317]: E1101 10:11:02.767678    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vg67w" podUID="42e0fdd6-77fb-4cd8-abbc-7b318962e8e8"
	Nov 01 10:11:13 functional-638125 kubelet[4317]: E1101 10:11:13.767947    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	Nov 01 10:11:15 functional-638125 kubelet[4317]: E1101 10:11:15.767643    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vg67w" podUID="42e0fdd6-77fb-4cd8-abbc-7b318962e8e8"
	Nov 01 10:11:24 functional-638125 kubelet[4317]: E1101 10:11:24.767787    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	Nov 01 10:11:29 functional-638125 kubelet[4317]: E1101 10:11:29.767616    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vg67w" podUID="42e0fdd6-77fb-4cd8-abbc-7b318962e8e8"
	Nov 01 10:11:35 functional-638125 kubelet[4317]: E1101 10:11:35.767231    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	Nov 01 10:11:43 functional-638125 kubelet[4317]: E1101 10:11:43.767345    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vg67w" podUID="42e0fdd6-77fb-4cd8-abbc-7b318962e8e8"
	Nov 01 10:11:49 functional-638125 kubelet[4317]: E1101 10:11:49.767618    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	Nov 01 10:11:56 functional-638125 kubelet[4317]: E1101 10:11:56.767957    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vg67w" podUID="42e0fdd6-77fb-4cd8-abbc-7b318962e8e8"
	Nov 01 10:12:04 functional-638125 kubelet[4317]: E1101 10:12:04.767241    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	Nov 01 10:12:08 functional-638125 kubelet[4317]: E1101 10:12:08.767650    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vg67w" podUID="42e0fdd6-77fb-4cd8-abbc-7b318962e8e8"
	Nov 01 10:12:16 functional-638125 kubelet[4317]: E1101 10:12:16.767956    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	Nov 01 10:12:23 functional-638125 kubelet[4317]: E1101 10:12:23.766957    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vg67w" podUID="42e0fdd6-77fb-4cd8-abbc-7b318962e8e8"
	Nov 01 10:12:28 functional-638125 kubelet[4317]: E1101 10:12:28.767847    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	Nov 01 10:12:34 functional-638125 kubelet[4317]: E1101 10:12:34.767722    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vg67w" podUID="42e0fdd6-77fb-4cd8-abbc-7b318962e8e8"
	Nov 01 10:12:42 functional-638125 kubelet[4317]: E1101 10:12:42.767707    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	Nov 01 10:12:48 functional-638125 kubelet[4317]: E1101 10:12:48.768242    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vg67w" podUID="42e0fdd6-77fb-4cd8-abbc-7b318962e8e8"
	Nov 01 10:12:56 functional-638125 kubelet[4317]: E1101 10:12:56.767741    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	Nov 01 10:13:02 functional-638125 kubelet[4317]: E1101 10:13:02.767816    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-vg67w" podUID="42e0fdd6-77fb-4cd8-abbc-7b318962e8e8"
	Nov 01 10:13:10 functional-638125 kubelet[4317]: E1101 10:13:10.768154    4317 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-d764l" podUID="9b8ed837-d4a6-4d31-8090-a9b67c975af2"
	
	
	==> kubernetes-dashboard [1395ede9754fa88631a742f02a41dcce1faeb14b3d2786a6ac9deb44e1a1c94c] <==
	2025/11/01 10:03:50 Using namespace: kubernetes-dashboard
	2025/11/01 10:03:50 Using in-cluster config to connect to apiserver
	2025/11/01 10:03:50 Using secret token for csrf signing
	2025/11/01 10:03:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:03:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:03:50 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:03:50 Generating JWE encryption key
	2025/11/01 10:03:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:03:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:03:50 Initializing JWE encryption key from synchronized object
	2025/11/01 10:03:50 Creating in-cluster Sidecar client
	2025/11/01 10:03:50 Successful request to sidecar
	2025/11/01 10:03:50 Serving insecurely on HTTP port: 9090
	2025/11/01 10:03:50 Starting overwatch
	
	
	==> storage-provisioner [181353c5c950e8237fcdf9e63a8bc4ec4f4c00efd3eed68e2b723e2a8912f5ce] <==
	W1101 10:02:04.850040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:06.853556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:06.857107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:08.859837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:08.864051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:10.867341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:10.872418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:12.875897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:12.879715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:14.882430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:14.887417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:16.891123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:16.896177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:18.899450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:18.903278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:20.906812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:20.911018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:22.914285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:22.919324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:24.922459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:24.926539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:26.929378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:26.933139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:28.936011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:02:28.939735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [2b9929e479fe0db9df91b3655d41fe5e72b17ee3bed52f120e0944242eb9b376] <==
	W1101 10:12:48.488514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:12:50.491167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:12:50.494736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:12:52.497890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:12:52.502406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:12:54.505488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:12:54.509021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:12:56.511573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:12:56.515346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:12:58.518406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:12:58.523450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:00.526444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:00.530603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:02.533866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:02.539153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:04.542460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:04.546486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:06.549738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:06.555036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:08.558277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:08.561905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:10.565063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:10.569011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:12.572769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:13:12.576571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-638125 -n functional-638125
helpers_test.go:269: (dbg) Run:  kubectl --context functional-638125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-vg67w hello-node-connect-7d85dfc575-d764l
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-638125 describe pod busybox-mount hello-node-75c85bcc94-vg67w hello-node-connect-7d85dfc575-d764l
helpers_test.go:290: (dbg) kubectl --context functional-638125 describe pod busybox-mount hello-node-75c85bcc94-vg67w hello-node-connect-7d85dfc575-d764l:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-638125/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 10:03:32 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://0c1dbdd02ddbd680568c5a0bfbd7bef2bfe51d2810aa76f7e3813aaff6596f2f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 10:03:35 +0000
	      Finished:     Sat, 01 Nov 2025 10:03:35 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2jmb6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2jmb6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m41s  default-scheduler  Successfully assigned default/busybox-mount to functional-638125
	  Normal  Pulling    9m41s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m39s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.215s (2.215s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m39s  kubelet            Created container: mount-munger
	  Normal  Started    9m39s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-vg67w
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-638125/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 10:03:29 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4m8jb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4m8jb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m44s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vg67w to functional-638125
	  Normal   Pulling    6m40s (x5 over 9m44s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m40s (x5 over 9m44s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m40s (x5 over 9m44s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m30s (x21 over 9m44s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m30s (x21 over 9m44s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-d764l
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-638125/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 10:03:11 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9sp5v (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9sp5v:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-d764l to functional-638125
	  Warning  Failed     7m17s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m17s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m2s (x19 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m51s (x20 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Normal   Pulling    4m36s (x6 over 10m)   kubelet            Pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.88s)
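The pod events and kubelet messages above point to a single root cause: the workload references its image by the short name "kicbase/echo-server", and CRI-O on this runner has short-name resolution in enforcing mode ("short name mode is enforcing ... returns ambiguous list"), so the pull is rejected as ambiguous instead of defaulting to docker.io. A minimal workaround sketch, assuming the intended image is the Docker Hub kicbase/echo-server (this is illustrative, not something the test itself does):

    # Illustrative only: point the container at a fully qualified reference so
    # CRI-O short-name enforcement has nothing to disambiguate.
    kubectl --context functional-638125 set image deployment/hello-node-connect \
        echo-server=docker.io/kicbase/echo-server:latest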

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image load --daemon kicbase/echo-server:functional-638125 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-638125" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image load --daemon kicbase/echo-server:functional-638125 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-638125" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-638125
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image load --daemon kicbase/echo-server:functional-638125 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-638125 image ls: (2.240827939s)
functional_test.go:461: expected "kicbase/echo-server:functional-638125" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.01s)
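The three daemon-load failures above share one assertion: after `image load --daemon` copies the tagged image from the host Docker daemon into the cluster's container runtime, the tag should show up in `image ls`, and in this run it never does. A minimal reproduction sketch using only commands already quoted in these tests (binary path, tag, and profile name taken from this run):

    # Tag the image on the host, load it into the functional-638125 runtime,
    # then list the images the cluster runtime can see.
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-638125
    out/minikube-linux-amd64 -p functional-638125 image load --daemon kicbase/echo-server:functional-638125
    out/minikube-linux-amd64 -p functional-638125 image ls | grep echo-server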

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image save kicbase/echo-server:functional-638125 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1101 10:03:20.145156   96687 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:03:20.145476   96687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:03:20.145487   96687 out.go:374] Setting ErrFile to fd 2...
	I1101 10:03:20.145506   96687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:03:20.145726   96687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:03:20.146300   96687 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:03:20.146422   96687 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:03:20.146835   96687 cli_runner.go:164] Run: docker container inspect functional-638125 --format={{.State.Status}}
	I1101 10:03:20.164182   96687 ssh_runner.go:195] Run: systemctl --version
	I1101 10:03:20.164256   96687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-638125
	I1101 10:03:20.181851   96687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/functional-638125/id_rsa Username:docker}
	I1101 10:03:20.281665   96687 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1101 10:03:20.281750   96687 cache_images.go:255] Failed to load cached images for "functional-638125": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1101 10:03:20.281780   96687 cache_images.go:267] failed pushing to: functional-638125

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.29s)
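This failure follows directly from the ImageSaveToFile failure above: the stderr shows `stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory`, so the tarball that `image save` was supposed to write never existed when `image load` ran. The intended round trip, sketched with the same paths and profile as this run:

    # Save the tagged image to a tarball, confirm it exists, then load it back.
    out/minikube-linux-amd64 -p functional-638125 image save kicbase/echo-server:functional-638125 \
        /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
    ls -l /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-638125 image load \
        /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar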

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-638125
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image save --daemon kicbase/echo-server:functional-638125 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-638125
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-638125: exit status 1 (16.803091ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-638125

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-638125

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)
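Here `image save --daemon` is expected to export the image from the cluster runtime back into the host Docker daemon, where the test then looks for it under the `localhost/` prefix (per the inspect command above); the "No such image" response is consistent with the image never having reached the cluster runtime in the earlier load steps. What the assertion boils down to, as a sketch:

    # Succeeds only if the image was exported back to the host Docker daemon.
    docker image inspect localhost/kicbase/echo-server:functional-638125 --format '{{.Id}}'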

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-638125 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-638125 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-vg67w" [42e0fdd6-77fb-4cd8-abbc-7b318962e8e8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-638125 -n functional-638125
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-01 10:13:29.579720374 +0000 UTC m=+1151.178995228
functional_test.go:1460: (dbg) Run:  kubectl --context functional-638125 describe po hello-node-75c85bcc94-vg67w -n default
functional_test.go:1460: (dbg) kubectl --context functional-638125 describe po hello-node-75c85bcc94-vg67w -n default:
Name:             hello-node-75c85bcc94-vg67w
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-638125/192.168.49.2
Start Time:       Sat, 01 Nov 2025 10:03:29 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4m8jb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-4m8jb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vg67w to functional-638125
Normal   Pulling    6m55s (x5 over 9m59s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m55s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m55s (x5 over 9m59s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m45s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m45s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-638125 logs hello-node-75c85bcc94-vg67w -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-638125 logs hello-node-75c85bcc94-vg67w -n default: exit status 1 (60.085982ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-vg67w" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-638125 logs hello-node-75c85bcc94-vg67w -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.60s)
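The deployment itself is created; the 10-minute wait fails because the single replica never gets past ImagePullBackOff, for the same short-name enforcement reason noted earlier. A hedged sketch of the same deployment with a fully qualified image (assuming docker.io is the intended registry), which would sidestep CRI-O's short-name check:

    # Same shape as the command at functional_test.go:1451, but fully qualified.
    kubectl --context functional-638125 create deployment hello-node \
        --image docker.io/kicbase/echo-server:latest
    kubectl --context functional-638125 expose deployment hello-node --type=NodePort --port=8080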

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 service --namespace=default --https --url hello-node: exit status 115 (531.166166ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30676
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-638125 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 service hello-node --url --format={{.IP}}: exit status 115 (533.031952ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-638125 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 service hello-node --url: exit status 115 (526.830774ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30676
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-638125 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30676
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                    
TestJSONOutput/pause/Command (2.18s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-762511 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-762511 --output=json --user=testUser: exit status 80 (2.180725162s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0646c1d0-1cf2-4a9a-831b-94af7da495ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-762511 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"259b681f-1a7b-48db-9ff2-3dbc612da513","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T10:22:07Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"05977798-f1b2-4784-8551-3f1ffb685c22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-762511 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.18s)

                                                
                                    
TestJSONOutput/unpause/Command (1.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-762511 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-762511 --output=json --user=testUser: exit status 80 (1.682110962s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8f6cfc11-3a80-4f68-bbf5-5dc271831172","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-762511 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"979544f5-9f0b-4ec7-96c3-ac91c343e82b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T10:22:08Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"d7a332d5-8606-438f-871f-43daeba9fd63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-762511 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.68s)
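
Both JSONOutput failures above surface the same GUEST_PAUSE / GUEST_UNPAUSE error through the --output=json stream, where each stdout line is a CloudEvents-style JSON object (specversion, id, source, type, datacontenttype, data). The following is a minimal sketch, assuming the stream is piped to stdin, of picking the error events out of that output; it is illustrative only, not minikube code.

// parse_events.go - read line-delimited JSON events as captured in the
// stdout blocks above and print the error events.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	// Long error events (like the boxed advice above) can exceed the default
	// 64 KiB token limit, so give the scanner a larger buffer.
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s: exitcode=%s name=%s\n",
				ev.ID, ev.Data["exitcode"], ev.Data["name"])
		}
	}
}

It could be fed directly from the failing command, e.g. out/minikube-linux-amd64 unpause -p json-output-762511 --output=json | go run parse_events.go.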

                                                
                                    
TestPause/serial/Pause (7.03s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-405879 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-405879 --alsologtostderr -v=5: exit status 80 (2.667834474s)

                                                
                                                
-- stdout --
	* Pausing node pause-405879 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:36:15.818869  265173 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:36:15.819154  265173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:36:15.819167  265173 out.go:374] Setting ErrFile to fd 2...
	I1101 10:36:15.819174  265173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:36:15.819380  265173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:36:15.819664  265173 out.go:368] Setting JSON to false
	I1101 10:36:15.819696  265173 mustload.go:66] Loading cluster: pause-405879
	I1101 10:36:15.820078  265173 config.go:182] Loaded profile config "pause-405879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:36:15.820468  265173 cli_runner.go:164] Run: docker container inspect pause-405879 --format={{.State.Status}}
	I1101 10:36:15.841260  265173 host.go:66] Checking if "pause-405879" exists ...
	I1101 10:36:15.841634  265173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:36:15.901177  265173 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-01 10:36:15.890845444 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:36:15.901885  265173 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-405879 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:36:15.903766  265173 out.go:179] * Pausing node pause-405879 ... 
	I1101 10:36:15.904827  265173 host.go:66] Checking if "pause-405879" exists ...
	I1101 10:36:15.905136  265173 ssh_runner.go:195] Run: systemctl --version
	I1101 10:36:15.905184  265173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-405879
	I1101 10:36:15.923408  265173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/pause-405879/id_rsa Username:docker}
	I1101 10:36:16.022707  265173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:36:16.035872  265173 pause.go:52] kubelet running: true
	I1101 10:36:16.035940  265173 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:36:16.173466  265173 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:36:16.173583  265173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:36:16.240356  265173 cri.go:89] found id: "154497e4584307753fbc1163ec426cf44c3aaf91ffece1764468680d374d336d"
	I1101 10:36:16.240393  265173 cri.go:89] found id: "4e3ba8e1b76b5d8e7ea5eba9a2d45211f7050aba1cd11526c15684d155d31010"
	I1101 10:36:16.240399  265173 cri.go:89] found id: "0a6b9e58d85ab0c406893099cade3aa9de63e82cb5a62b8daad4a8fe01e95791"
	I1101 10:36:16.240405  265173 cri.go:89] found id: "0011544ba1ef753d375b43daa1e4a6000f7f2f19fbf46baa1f844bfc4c49d3d6"
	I1101 10:36:16.240409  265173 cri.go:89] found id: "3a8ad7e0e76341d07adeca6ac580feca170a71427f035e1d93c352ee61452bd3"
	I1101 10:36:16.240413  265173 cri.go:89] found id: "92360e8c9a9bca5fe2511249daf7d5d53794fc72ba17f3dc32d065e87322e39c"
	I1101 10:36:16.240416  265173 cri.go:89] found id: "08681f77a7ca0ad657ca55ebaeb0f2715715c3bb1e381d90f12bf0152c4b45ff"
	I1101 10:36:16.240420  265173 cri.go:89] found id: ""
	I1101 10:36:16.240486  265173 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:36:16.252636  265173 retry.go:31] will retry after 185.03297ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:36:16Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:36:16.438045  265173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:36:16.451948  265173 pause.go:52] kubelet running: false
	I1101 10:36:16.452002  265173 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:36:16.570401  265173 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:36:16.570508  265173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:36:16.640155  265173 cri.go:89] found id: "154497e4584307753fbc1163ec426cf44c3aaf91ffece1764468680d374d336d"
	I1101 10:36:16.640176  265173 cri.go:89] found id: "4e3ba8e1b76b5d8e7ea5eba9a2d45211f7050aba1cd11526c15684d155d31010"
	I1101 10:36:16.640181  265173 cri.go:89] found id: "0a6b9e58d85ab0c406893099cade3aa9de63e82cb5a62b8daad4a8fe01e95791"
	I1101 10:36:16.640184  265173 cri.go:89] found id: "0011544ba1ef753d375b43daa1e4a6000f7f2f19fbf46baa1f844bfc4c49d3d6"
	I1101 10:36:16.640186  265173 cri.go:89] found id: "3a8ad7e0e76341d07adeca6ac580feca170a71427f035e1d93c352ee61452bd3"
	I1101 10:36:16.640189  265173 cri.go:89] found id: "92360e8c9a9bca5fe2511249daf7d5d53794fc72ba17f3dc32d065e87322e39c"
	I1101 10:36:16.640192  265173 cri.go:89] found id: "08681f77a7ca0ad657ca55ebaeb0f2715715c3bb1e381d90f12bf0152c4b45ff"
	I1101 10:36:16.640194  265173 cri.go:89] found id: ""
	I1101 10:36:16.640238  265173 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:36:16.652756  265173 retry.go:31] will retry after 483.283878ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:36:16Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:36:17.136420  265173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:36:17.150709  265173 pause.go:52] kubelet running: false
	I1101 10:36:17.150777  265173 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:36:17.272824  265173 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:36:17.272910  265173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:36:17.347701  265173 cri.go:89] found id: "154497e4584307753fbc1163ec426cf44c3aaf91ffece1764468680d374d336d"
	I1101 10:36:17.347727  265173 cri.go:89] found id: "4e3ba8e1b76b5d8e7ea5eba9a2d45211f7050aba1cd11526c15684d155d31010"
	I1101 10:36:17.347734  265173 cri.go:89] found id: "0a6b9e58d85ab0c406893099cade3aa9de63e82cb5a62b8daad4a8fe01e95791"
	I1101 10:36:17.347739  265173 cri.go:89] found id: "0011544ba1ef753d375b43daa1e4a6000f7f2f19fbf46baa1f844bfc4c49d3d6"
	I1101 10:36:17.347743  265173 cri.go:89] found id: "3a8ad7e0e76341d07adeca6ac580feca170a71427f035e1d93c352ee61452bd3"
	I1101 10:36:17.347747  265173 cri.go:89] found id: "92360e8c9a9bca5fe2511249daf7d5d53794fc72ba17f3dc32d065e87322e39c"
	I1101 10:36:17.347751  265173 cri.go:89] found id: "08681f77a7ca0ad657ca55ebaeb0f2715715c3bb1e381d90f12bf0152c4b45ff"
	I1101 10:36:17.347755  265173 cri.go:89] found id: ""
	I1101 10:36:17.347792  265173 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:36:17.360748  265173 retry.go:31] will retry after 567.325881ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:36:17Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:36:17.928578  265173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:36:17.941740  265173 pause.go:52] kubelet running: false
	I1101 10:36:17.941805  265173 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:36:18.050620  265173 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:36:18.050703  265173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:36:18.116986  265173 cri.go:89] found id: "154497e4584307753fbc1163ec426cf44c3aaf91ffece1764468680d374d336d"
	I1101 10:36:18.117011  265173 cri.go:89] found id: "4e3ba8e1b76b5d8e7ea5eba9a2d45211f7050aba1cd11526c15684d155d31010"
	I1101 10:36:18.117014  265173 cri.go:89] found id: "0a6b9e58d85ab0c406893099cade3aa9de63e82cb5a62b8daad4a8fe01e95791"
	I1101 10:36:18.117018  265173 cri.go:89] found id: "0011544ba1ef753d375b43daa1e4a6000f7f2f19fbf46baa1f844bfc4c49d3d6"
	I1101 10:36:18.117020  265173 cri.go:89] found id: "3a8ad7e0e76341d07adeca6ac580feca170a71427f035e1d93c352ee61452bd3"
	I1101 10:36:18.117023  265173 cri.go:89] found id: "92360e8c9a9bca5fe2511249daf7d5d53794fc72ba17f3dc32d065e87322e39c"
	I1101 10:36:18.117025  265173 cri.go:89] found id: "08681f77a7ca0ad657ca55ebaeb0f2715715c3bb1e381d90f12bf0152c4b45ff"
	I1101 10:36:18.117027  265173 cri.go:89] found id: ""
	I1101 10:36:18.117064  265173 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:36:18.269123  265173 out.go:203] 
	W1101 10:36:18.274384  265173 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:36:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:36:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:36:18.274447  265173 out.go:285] * 
	* 
	W1101 10:36:18.279510  265173 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:36:18.401922  265173 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-405879 --alsologtostderr -v=5" : exit status 80
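
The stderr above shows the pause path retrying sudo runc list -f json three times with growing delays (roughly 185ms, 483ms, 567ms per the retry.go:31 lines) before exiting with GUEST_PAUSE, every attempt failing with "open /run/runc: no such file or directory" even though the profile runs CRI-O. Below is a minimal, illustrative sketch of that retry-with-backoff shape; the attempt count, base delay, and jitter are assumptions, and this is not minikube's actual retry package.

// retry_sketch.go - run a command a few times with a growing, jittered delay
// between failures, returning the last error if every attempt fails.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// grow the delay each round and add jitter, roughly like the log shows
		sleep := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		time.Sleep(sleep)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	err := retry(4, 200*time.Millisecond, func() error {
		// the same listing the pause path performs on the node; in the log
		// above it keeps failing because /run/runc does not exist
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return errors.New(string(out))
		}
		return nil
	})
	if err != nil {
		fmt.Println("giving up:", err)
	}
}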
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-405879
helpers_test.go:243: (dbg) docker inspect pause-405879:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15",
	        "Created": "2025-11-01T10:35:35.045461825Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253486,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:35:35.08859762Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15/hostname",
	        "HostsPath": "/var/lib/docker/containers/754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15/hosts",
	        "LogPath": "/var/lib/docker/containers/754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15/754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15-json.log",
	        "Name": "/pause-405879",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-405879:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-405879",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15",
	                "LowerDir": "/var/lib/docker/overlay2/cd784938e042db5bb208f24b8074a7acb6bf96587a02e53de81b06ace65b5c8b-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd784938e042db5bb208f24b8074a7acb6bf96587a02e53de81b06ace65b5c8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd784938e042db5bb208f24b8074a7acb6bf96587a02e53de81b06ace65b5c8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd784938e042db5bb208f24b8074a7acb6bf96587a02e53de81b06ace65b5c8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-405879",
	                "Source": "/var/lib/docker/volumes/pause-405879/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-405879",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-405879",
	                "name.minikube.sigs.k8s.io": "pause-405879",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "502016ba2a712cef2cc32a37f29eb61571c3bb2e4b1c9a06eda7f92bb42fe3d5",
	            "SandboxKey": "/var/run/docker/netns/502016ba2a71",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-405879": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:fe:47:5f:55:ed",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e6b76904196f7a013a507cd799387e87773503efa226265ce4c1e178b68026d9",
	                    "EndpointID": "7f121aeda7f05cddb4058b62ac6da378daa9ec896a261cb07e1547d40c6fab8b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-405879",
	                        "754e54d0295d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
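
The inspect output confirms the port mapping the pause command relied on: 22/tcp is published on 127.0.0.1:33013, which is the SSH endpoint sshutil dialed at 10:36:15.923408 above. A minimal sketch of reading that port with the same Go template the log shows cli_runner passing to docker container inspect -f; the container name is taken from this run's profile and the snippet assumes docker is on PATH.

// ssh_port_sketch.go - print the host port docker mapped to the guest's 22/tcp.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"pause-405879").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out)))
}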
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-405879 -n pause-405879
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-405879 -n pause-405879: exit status 2 (416.930729ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-405879 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-405879 logs -n 25: (1.741546091s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-299863 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo cri-dockerd --version                                                                                 │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo systemctl cat containerd --no-pager                                                                   │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo cat /etc/containerd/config.toml                                                                       │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo containerd config dump                                                                                │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo systemctl cat crio --no-pager                                                                         │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo crio config                                                                                           │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ delete  │ -p cilium-299863                                                                                                            │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:35 UTC │
	│ start   │ -p pause-405879 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-405879              │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ delete  │ -p missing-upgrade-834138                                                                                                   │ missing-upgrade-834138    │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:35 UTC │
	│ start   │ -p NoKubernetes-585638 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-585638       │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ start   │ -p NoKubernetes-585638 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-585638       │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p running-upgrade-376123 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ running-upgrade-376123    │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p NoKubernetes-585638 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-585638       │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ start   │ -p pause-405879 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-405879              │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ delete  │ -p running-upgrade-376123                                                                                                   │ running-upgrade-376123    │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p force-systemd-flag-841776 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-841776 │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ pause   │ -p pause-405879 --alsologtostderr -v=5                                                                                      │ pause-405879              │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:36:14
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:36:14.884609  264742 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:36:14.884919  264742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:36:14.884930  264742 out.go:374] Setting ErrFile to fd 2...
	I1101 10:36:14.884934  264742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:36:14.885127  264742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:36:14.886033  264742 out.go:368] Setting JSON to false
	I1101 10:36:14.887417  264742 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8315,"bootTime":1761985060,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:36:14.887548  264742 start.go:143] virtualization: kvm guest
	I1101 10:36:14.889306  264742 out.go:179] * [force-systemd-flag-841776] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:36:14.890558  264742 notify.go:221] Checking for updates...
	I1101 10:36:14.890575  264742 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:36:14.891842  264742 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:36:14.893143  264742 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:36:14.894263  264742 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:36:14.895514  264742 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:36:14.896588  264742 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:36:14.897992  264742 config.go:182] Loaded profile config "NoKubernetes-585638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 10:36:14.898091  264742 config.go:182] Loaded profile config "kubernetes-upgrade-896514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:36:14.898192  264742 config.go:182] Loaded profile config "pause-405879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:36:14.898285  264742 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:36:14.922071  264742 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:36:14.922157  264742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:36:14.976925  264742 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:36:14.967785642 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:36:14.977047  264742 docker.go:319] overlay module found
	I1101 10:36:14.978698  264742 out.go:179] * Using the docker driver based on user configuration
	I1101 10:36:10.962625  237182 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.055950297s)
	W1101 10:36:10.962678  237182 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1101 10:36:10.962687  237182 logs.go:123] Gathering logs for kube-apiserver [dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c] ...
	I1101 10:36:10.962702  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c"
	I1101 10:36:11.002217  237182 logs.go:123] Gathering logs for kube-apiserver [36993c40ba3ce532dfa039a8eaa02c6581b55012f5c6c3e866b4208c1f4e09c2] ...
	I1101 10:36:11.002262  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 36993c40ba3ce532dfa039a8eaa02c6581b55012f5c6c3e866b4208c1f4e09c2"
	I1101 10:36:13.547404  237182 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:36:13.915559  237182 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:32996->192.168.85.2:8443: read: connection reset by peer
	I1101 10:36:13.915630  237182 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:36:13.915691  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:36:13.948777  237182 cri.go:89] found id: "dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c"
	I1101 10:36:13.948797  237182 cri.go:89] found id: "36993c40ba3ce532dfa039a8eaa02c6581b55012f5c6c3e866b4208c1f4e09c2"
	I1101 10:36:13.948800  237182 cri.go:89] found id: ""
	I1101 10:36:13.948808  237182 logs.go:282] 2 containers: [dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c 36993c40ba3ce532dfa039a8eaa02c6581b55012f5c6c3e866b4208c1f4e09c2]
	I1101 10:36:13.948861  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:13.952959  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:13.956647  237182 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:36:13.956724  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:36:13.985668  237182 cri.go:89] found id: ""
	I1101 10:36:13.985696  237182 logs.go:282] 0 containers: []
	W1101 10:36:13.985707  237182 logs.go:284] No container was found matching "etcd"
	I1101 10:36:13.985715  237182 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:36:13.985775  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:36:14.014814  237182 cri.go:89] found id: ""
	I1101 10:36:14.014843  237182 logs.go:282] 0 containers: []
	W1101 10:36:14.014853  237182 logs.go:284] No container was found matching "coredns"
	I1101 10:36:14.014860  237182 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:36:14.014916  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:36:14.044314  237182 cri.go:89] found id: "2b649fc79df1e1125652e447be238ee4f0ea051389f69d7e64de077220268798"
	I1101 10:36:14.044334  237182 cri.go:89] found id: ""
	I1101 10:36:14.044342  237182 logs.go:282] 1 containers: [2b649fc79df1e1125652e447be238ee4f0ea051389f69d7e64de077220268798]
	I1101 10:36:14.044421  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:14.048361  237182 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:36:14.048421  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:36:14.080747  237182 cri.go:89] found id: ""
	I1101 10:36:14.080768  237182 logs.go:282] 0 containers: []
	W1101 10:36:14.080775  237182 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:36:14.080793  237182 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:36:14.080856  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:36:14.110056  237182 cri.go:89] found id: "1d262692167e3747a621e8e27e09de7c18a1e69a281b7fa491fc0231fb8a967f"
	I1101 10:36:14.110081  237182 cri.go:89] found id: "4ab68c0fbf05433ac7361883f7f80ca943cad4a6dd191f26592ced492039eab0"
	I1101 10:36:14.110086  237182 cri.go:89] found id: ""
	I1101 10:36:14.110098  237182 logs.go:282] 2 containers: [1d262692167e3747a621e8e27e09de7c18a1e69a281b7fa491fc0231fb8a967f 4ab68c0fbf05433ac7361883f7f80ca943cad4a6dd191f26592ced492039eab0]
	I1101 10:36:14.110159  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:14.114385  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:14.118191  237182 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:36:14.118237  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:36:14.144514  237182 cri.go:89] found id: ""
	I1101 10:36:14.144542  237182 logs.go:282] 0 containers: []
	W1101 10:36:14.144553  237182 logs.go:284] No container was found matching "kindnet"
	I1101 10:36:14.144562  237182 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:36:14.144610  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:36:14.172730  237182 cri.go:89] found id: ""
	I1101 10:36:14.172755  237182 logs.go:282] 0 containers: []
	W1101 10:36:14.172765  237182 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:36:14.172784  237182 logs.go:123] Gathering logs for dmesg ...
	I1101 10:36:14.172798  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:36:14.189706  237182 logs.go:123] Gathering logs for kube-apiserver [dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c] ...
	I1101 10:36:14.189739  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c"
	I1101 10:36:14.227034  237182 logs.go:123] Gathering logs for kube-scheduler [2b649fc79df1e1125652e447be238ee4f0ea051389f69d7e64de077220268798] ...
	I1101 10:36:14.227064  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2b649fc79df1e1125652e447be238ee4f0ea051389f69d7e64de077220268798"
	I1101 10:36:14.279143  237182 logs.go:123] Gathering logs for container status ...
	I1101 10:36:14.279180  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:36:14.310995  237182 logs.go:123] Gathering logs for kubelet ...
	I1101 10:36:14.311031  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:36:14.377622  237182 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:36:14.377658  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:36:14.435819  237182 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:36:14.435841  237182 logs.go:123] Gathering logs for kube-apiserver [36993c40ba3ce532dfa039a8eaa02c6581b55012f5c6c3e866b4208c1f4e09c2] ...
	I1101 10:36:14.435858  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 36993c40ba3ce532dfa039a8eaa02c6581b55012f5c6c3e866b4208c1f4e09c2"
	I1101 10:36:14.472004  237182 logs.go:123] Gathering logs for kube-controller-manager [1d262692167e3747a621e8e27e09de7c18a1e69a281b7fa491fc0231fb8a967f] ...
	I1101 10:36:14.472035  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d262692167e3747a621e8e27e09de7c18a1e69a281b7fa491fc0231fb8a967f"
	I1101 10:36:14.500814  237182 logs.go:123] Gathering logs for kube-controller-manager [4ab68c0fbf05433ac7361883f7f80ca943cad4a6dd191f26592ced492039eab0] ...
	I1101 10:36:14.500841  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ab68c0fbf05433ac7361883f7f80ca943cad4a6dd191f26592ced492039eab0"
	I1101 10:36:14.528080  237182 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:36:14.528132  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:36:14.979862  264742 start.go:309] selected driver: docker
	I1101 10:36:14.979874  264742 start.go:930] validating driver "docker" against <nil>
	I1101 10:36:14.979887  264742 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:36:14.980674  264742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:36:15.037369  264742 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:36:15.02814876 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:36:15.037595  264742 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:36:15.037811  264742 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 10:36:15.039421  264742 out.go:179] * Using Docker driver with root privileges
	I1101 10:36:15.040481  264742 cni.go:84] Creating CNI manager for ""
	I1101 10:36:15.040570  264742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:36:15.040583  264742 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:36:15.040649  264742 start.go:353] cluster config:
	{Name:force-systemd-flag-841776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-841776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:36:15.041960  264742 out.go:179] * Starting "force-systemd-flag-841776" primary control-plane node in "force-systemd-flag-841776" cluster
	I1101 10:36:15.043040  264742 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:36:15.044198  264742 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:36:15.045523  264742 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:36:15.045554  264742 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:36:15.045570  264742 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:36:15.045583  264742 cache.go:59] Caching tarball of preloaded images
	I1101 10:36:15.045677  264742 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:36:15.045689  264742 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:36:15.045800  264742 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/force-systemd-flag-841776/config.json ...
	I1101 10:36:15.045825  264742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/force-systemd-flag-841776/config.json: {Name:mk7449200b91430e8697351111f6425117f6a5dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:36:15.065881  264742 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:36:15.065904  264742 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:36:15.065919  264742 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:36:15.065945  264742 start.go:360] acquireMachinesLock for force-systemd-flag-841776: {Name:mk1ee397c6ff1e52820690aa6db5f10820ff5978 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:15.066043  264742 start.go:364] duration metric: took 81.522µs to acquireMachinesLock for "force-systemd-flag-841776"
	I1101 10:36:15.066067  264742 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-841776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-841776 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:36:15.066133  264742 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:36:15.065327  262749 pod_ready.go:94] pod "kube-proxy-2s44m" is "Ready"
	I1101 10:36:15.065352  262749 pod_ready.go:86] duration metric: took 400.799728ms for pod "kube-proxy-2s44m" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:36:15.266262  262749 pod_ready.go:83] waiting for pod "kube-scheduler-pause-405879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:36:15.663863  262749 pod_ready.go:94] pod "kube-scheduler-pause-405879" is "Ready"
	I1101 10:36:15.663891  262749 pod_ready.go:86] duration metric: took 397.601074ms for pod "kube-scheduler-pause-405879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:36:15.663906  262749 pod_ready.go:40] duration metric: took 1.604203696s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:36:15.717681  262749 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:36:15.720442  262749 out.go:179] * Done! kubectl is now configured to use "pause-405879" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.697139927Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.698056614Z" level=info msg="Conmon does support the --sync option"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.698072807Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.698086727Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.698844395Z" level=info msg="Conmon does support the --sync option"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.698863973Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.70283858Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.702860312Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.70334655Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hook
s.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_m
appings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.703830639Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.703902232Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.709776559Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.754077228Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-qksbx Namespace:kube-system ID:1bb50a34203cb1ebf1175e4b010b6658a3de28a37f1f41e30553b2a37349e54f UID:c86c63f4-74d0-46b2-b04c-725db887620d NetNS:/var/run/netns/1e9bdf9d-8326-4c27-a1be-e12ce72b403f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00012c4b8}] Aliases:map[]}"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.754411538Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-qksbx for CNI network kindnet (type=ptp)"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755350989Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755380756Z" level=info msg="Starting seccomp notifier watcher"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755433441Z" level=info msg="Create NRI interface"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755554867Z" level=info msg="built-in NRI default validator is disabled"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.75556574Z" level=info msg="runtime interface created"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755578394Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755584584Z" level=info msg="runtime interface starting up..."
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755590047Z" level=info msg="starting plugins..."
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755600409Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755955363Z" level=info msg="No systemd watchdog enabled"
	Nov 01 10:36:12 pause-405879 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	154497e458430       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   1bb50a34203cb       coredns-66bc5c9577-qksbx               kube-system
	4e3ba8e1b76b5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago      Running             kube-proxy                0                   aefadb651706c       kube-proxy-2s44m                       kube-system
	0a6b9e58d85ab       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   a9b62fc12cadd       kindnet-trqjm                          kube-system
	0011544ba1ef7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Running             etcd                      0                   2e592da1f15d9       etcd-pause-405879                      kube-system
	3a8ad7e0e7634       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   35 seconds ago      Running             kube-controller-manager   0                   fa9df57d7f374       kube-controller-manager-pause-405879   kube-system
	92360e8c9a9bc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago      Running             kube-scheduler            0                   52fab05af124c       kube-scheduler-pause-405879            kube-system
	08681f77a7ca0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Running             kube-apiserver            0                   47ebaa8d7dfe0       kube-apiserver-pause-405879            kube-system
	
	
	==> coredns [154497e4584307753fbc1163ec426cf44c3aaf91ffece1764468680d374d336d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37294 - 26462 "HINFO IN 1829789769323189349.4354050200118358698. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032372635s
	
	
	==> describe nodes <==
	Name:               pause-405879
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-405879
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=pause-405879
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_35_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:35:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-405879
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:36:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:36:06 +0000   Sat, 01 Nov 2025 10:35:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:36:06 +0000   Sat, 01 Nov 2025 10:35:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:36:06 +0000   Sat, 01 Nov 2025 10:35:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:36:06 +0000   Sat, 01 Nov 2025 10:36:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-405879
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                85cefb4f-0cb0-45ec-95d1-f9d37de67ff4
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-qksbx                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-405879                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-trqjm                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-405879             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-405879    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-2s44m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-405879             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node pause-405879 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node pause-405879 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node pause-405879 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node pause-405879 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node pause-405879 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node pause-405879 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node pause-405879 event: Registered Node pause-405879 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-405879 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 1 09:57] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.028293] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023905] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023938] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023934] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +2.047845] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[Nov 1 09:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +8.191344] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[ +16.382718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[ +32.253574] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	
	
	==> etcd [0011544ba1ef753d375b43daa1e4a6000f7f2f19fbf46baa1f844bfc4c49d3d6] <==
	{"level":"warn","ts":"2025-11-01T10:35:50.700090Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.996573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-01T10:35:50.700449Z","caller":"traceutil/trace.go:172","msg":"trace[1557822150] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:298; }","duration":"108.353737ms","start":"2025-11-01T10:35:50.592080Z","end":"2025-11-01T10:35:50.700434Z","steps":["trace[1557822150] 'agreement among raft nodes before linearized reading'  (duration: 107.932085ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:50.908803Z","caller":"traceutil/trace.go:172","msg":"trace[1541339381] linearizableReadLoop","detail":"{readStateIndex:309; appliedIndex:309; }","duration":"192.997047ms","start":"2025-11-01T10:35:50.715784Z","end":"2025-11-01T10:35:50.908781Z","steps":["trace[1541339381] 'read index received'  (duration: 192.98929ms)","trace[1541339381] 'applied index is now lower than readState.Index'  (duration: 6.193µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:35:50.908928Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.13029ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/job-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:35:50.908960Z","caller":"traceutil/trace.go:172","msg":"trace[89969182] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/job-controller; range_end:; response_count:0; response_revision:298; }","duration":"193.179223ms","start":"2025-11-01T10:35:50.715770Z","end":"2025-11-01T10:35:50.908949Z","steps":["trace[89969182] 'agreement among raft nodes before linearized reading'  (duration: 193.088872ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:50.909011Z","caller":"traceutil/trace.go:172","msg":"trace[1610482022] transaction","detail":"{read_only:false; response_revision:299; number_of_response:1; }","duration":"199.909457ms","start":"2025-11-01T10:35:50.709088Z","end":"2025-11-01T10:35:50.908997Z","steps":["trace[1610482022] 'process raft request'  (duration: 199.773037ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:35:51.165339Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.898693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/kindnet\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:35:51.165411Z","caller":"traceutil/trace.go:172","msg":"trace[1816080095] range","detail":"{range_begin:/registry/clusterroles/kindnet; range_end:; response_count:0; response_revision:299; }","duration":"166.972492ms","start":"2025-11-01T10:35:50.998415Z","end":"2025-11-01T10:35:51.165388Z","steps":["trace[1816080095] 'agreement among raft nodes before linearized reading'  (duration: 37.984659ms)","trace[1816080095] 'range keys from in-memory index tree'  (duration: 128.875395ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:35:51.165478Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.976301ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789722411185083 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/job-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/job-controller\" value_size:119 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:35:51.165661Z","caller":"traceutil/trace.go:172","msg":"trace[1570133166] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"251.623091ms","start":"2025-11-01T10:35:50.914023Z","end":"2025-11-01T10:35:51.165646Z","steps":["trace[1570133166] 'process raft request'  (duration: 122.429953ms)","trace[1570133166] 'compare'  (duration: 128.86411ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:35:51.165665Z","caller":"traceutil/trace.go:172","msg":"trace[1948869252] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"247.892911ms","start":"2025-11-01T10:35:50.917762Z","end":"2025-11-01T10:35:51.165655Z","steps":["trace[1948869252] 'process raft request'  (duration: 247.811843ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:35:51.480887Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.2514ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789722411185100 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:35:51.481081Z","caller":"traceutil/trace.go:172","msg":"trace[1792339761] transaction","detail":"{read_only:false; response_revision:306; number_of_response:1; }","duration":"276.539161ms","start":"2025-11-01T10:35:51.204521Z","end":"2025-11-01T10:35:51.481060Z","steps":["trace[1792339761] 'process raft request'  (duration: 126.062487ms)","trace[1792339761] 'compare'  (duration: 150.144217ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:35:51.481160Z","caller":"traceutil/trace.go:172","msg":"trace[776775587] transaction","detail":"{read_only:false; response_revision:308; number_of_response:1; }","duration":"273.777384ms","start":"2025-11-01T10:35:51.207372Z","end":"2025-11-01T10:35:51.481149Z","steps":["trace[776775587] 'process raft request'  (duration: 273.724763ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:51.481373Z","caller":"traceutil/trace.go:172","msg":"trace[42816039] transaction","detail":"{read_only:false; response_revision:307; number_of_response:1; }","duration":"276.583673ms","start":"2025-11-01T10:35:51.204779Z","end":"2025-11-01T10:35:51.481363Z","steps":["trace[42816039] 'process raft request'  (duration: 276.219847ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:51.592425Z","caller":"traceutil/trace.go:172","msg":"trace[1913520560] transaction","detail":"{read_only:false; response_revision:309; number_of_response:1; }","duration":"104.334919ms","start":"2025-11-01T10:35:51.488070Z","end":"2025-11-01T10:35:51.592405Z","steps":["trace[1913520560] 'process raft request'  (duration: 96.393926ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:51.594960Z","caller":"traceutil/trace.go:172","msg":"trace[158421973] transaction","detail":"{read_only:false; response_revision:311; number_of_response:1; }","duration":"104.327293ms","start":"2025-11-01T10:35:51.490620Z","end":"2025-11-01T10:35:51.594947Z","steps":["trace[158421973] 'process raft request'  (duration: 104.282948ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:51.594990Z","caller":"traceutil/trace.go:172","msg":"trace[1030127544] transaction","detail":"{read_only:false; response_revision:310; number_of_response:1; }","duration":"104.961879ms","start":"2025-11-01T10:35:51.490017Z","end":"2025-11-01T10:35:51.594979Z","steps":["trace[1030127544] 'process raft request'  (duration: 104.789428ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:35:51.865156Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.768573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-405879\" limit:1 ","response":"range_response_count:1 size:4812"}
	{"level":"info","ts":"2025-11-01T10:35:51.865217Z","caller":"traceutil/trace.go:172","msg":"trace[969886107] range","detail":"{range_begin:/registry/minions/pause-405879; range_end:; response_count:1; response_revision:312; }","duration":"197.840038ms","start":"2025-11-01T10:35:51.667362Z","end":"2025-11-01T10:35:51.865202Z","steps":["trace[969886107] 'agreement among raft nodes before linearized reading'  (duration: 82.252166ms)","trace[969886107] 'range keys from in-memory index tree'  (duration: 115.412116ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:35:51.865542Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.505636ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789722411185115 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-405879\" mod_revision:301 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-405879\" value_size:7813 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-405879\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:35:51.865688Z","caller":"traceutil/trace.go:172","msg":"trace[125784908] transaction","detail":"{read_only:false; response_revision:313; number_of_response:1; }","duration":"260.915083ms","start":"2025-11-01T10:35:51.604755Z","end":"2025-11-01T10:35:51.865670Z","steps":["trace[125784908] 'process raft request'  (duration: 144.908764ms)","trace[125784908] 'compare'  (duration: 115.388583ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:35:51.865710Z","caller":"traceutil/trace.go:172","msg":"trace[1875747674] transaction","detail":"{read_only:false; response_revision:315; number_of_response:1; }","duration":"200.538874ms","start":"2025-11-01T10:35:51.665161Z","end":"2025-11-01T10:35:51.865700Z","steps":["trace[1875747674] 'process raft request'  (duration: 200.504598ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:51.865751Z","caller":"traceutil/trace.go:172","msg":"trace[1856103689] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"259.036254ms","start":"2025-11-01T10:35:51.606700Z","end":"2025-11-01T10:35:51.865737Z","steps":["trace[1856103689] 'process raft request'  (duration: 258.904907ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:36:18.992824Z","caller":"traceutil/trace.go:172","msg":"trace[1475105633] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"144.179995ms","start":"2025-11-01T10:36:18.848625Z","end":"2025-11-01T10:36:18.992805Z","steps":["trace[1475105633] 'process raft request'  (duration: 129.406567ms)","trace[1475105633] 'compare'  (duration: 14.676321ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:36:20 up  2:18,  0 user,  load average: 5.05, 2.66, 1.63
	Linux pause-405879 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0a6b9e58d85ab0c406893099cade3aa9de63e82cb5a62b8daad4a8fe01e95791] <==
	I1101 10:35:55.870488       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:35:55.870817       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 10:35:55.870976       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:35:55.870993       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:35:55.871017       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:35:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:35:55.976474       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:35:55.976565       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:35:55.976577       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:35:55.976712       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:35:56.276659       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:35:56.276685       1 metrics.go:72] Registering metrics
	I1101 10:35:56.276737       1 controller.go:711] "Syncing nftables rules"
	I1101 10:36:06.069744       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:36:06.069834       1 main.go:301] handling current node
	I1101 10:36:16.070585       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:36:16.070630       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08681f77a7ca0ad657ca55ebaeb0f2715715c3bb1e381d90f12bf0152c4b45ff] <==
	I1101 10:35:47.147585       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1101 10:35:47.150962       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:35:47.163346       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:35:47.165809       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:35:47.166069       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:35:47.173813       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:35:47.174036       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:35:47.194163       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:48.056925       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:35:48.064034       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:35:48.064121       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:35:48.621028       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:35:48.659593       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:35:48.758453       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:35:48.764163       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1101 10:35:48.765138       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:35:48.774569       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:35:49.108869       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:35:49.541269       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:35:49.551641       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:35:49.559670       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:35:54.962760       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:54.967444       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:55.111677       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:35:55.211471       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [3a8ad7e0e76341d07adeca6ac580feca170a71427f035e1d93c352ee61452bd3] <==
	I1101 10:35:54.094945       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:35:54.095455       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-405879" podCIDRs=["10.244.0.0/24"]
	I1101 10:35:54.097240       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:35:54.102473       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:35:54.102524       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:35:54.107948       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:35:54.108117       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:35:54.108187       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:35:54.108690       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:35:54.108815       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:35:54.111733       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:35:54.113062       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:35:54.114200       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:35:54.115382       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:35:54.116250       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:35:54.120971       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:35:54.120989       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:35:54.120997       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:35:54.123885       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:35:54.124569       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:35:54.128245       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:35:54.129313       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:35:54.130522       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:35:54.137848       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:36:09.073408       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4e3ba8e1b76b5d8e7ea5eba9a2d45211f7050aba1cd11526c15684d155d31010] <==
	I1101 10:35:55.692930       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:35:55.761606       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:35:55.861998       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:35:55.862034       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1101 10:35:55.862100       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:35:55.880701       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:35:55.880760       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:35:55.885739       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:35:55.886079       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:35:55.886117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:35:55.887256       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:35:55.887287       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:35:55.887367       1 config.go:309] "Starting node config controller"
	I1101 10:35:55.887382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:35:55.887450       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:35:55.887548       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:35:55.887742       1 config.go:200] "Starting service config controller"
	I1101 10:35:55.887769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:35:55.987528       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:35:55.987551       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:35:55.987889       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:35:55.987904       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [92360e8c9a9bca5fe2511249daf7d5d53794fc72ba17f3dc32d065e87322e39c] <==
	E1101 10:35:47.213646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:35:47.213591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:35:47.213645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:35:47.213734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:35:47.213755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:35:47.213816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:35:47.213841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:35:47.213635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:35:47.213886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:35:47.213956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:35:47.213998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:35:48.049693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:35:48.060571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:35:48.070875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:35:48.127794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:35:48.150618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:35:48.159155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:35:48.193298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:35:48.223556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:35:48.251567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:35:48.345935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:35:48.346258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:35:48.402603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:35:48.402656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1101 10:35:48.806002       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:35:50 pause-405879 kubelet[1298]: E1101 10:35:50.703175    1298 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-405879\" already exists" pod="kube-system/kube-apiserver-pause-405879"
	Nov 01 10:35:50 pause-405879 kubelet[1298]: I1101 10:35:50.910643    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-405879" podStartSLOduration=1.910621772 podStartE2EDuration="1.910621772s" podCreationTimestamp="2025-11-01 10:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:50.702714298 +0000 UTC m=+1.356968513" watchObservedRunningTime="2025-11-01 10:35:50.910621772 +0000 UTC m=+1.564875979"
	Nov 01 10:35:51 pause-405879 kubelet[1298]: I1101 10:35:51.166869    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-405879" podStartSLOduration=2.166842899 podStartE2EDuration="2.166842899s" podCreationTimestamp="2025-11-01 10:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:50.91061598 +0000 UTC m=+1.564870192" watchObservedRunningTime="2025-11-01 10:35:51.166842899 +0000 UTC m=+1.821097123"
	Nov 01 10:35:51 pause-405879 kubelet[1298]: I1101 10:35:51.193110    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-405879" podStartSLOduration=2.193088326 podStartE2EDuration="2.193088326s" podCreationTimestamp="2025-11-01 10:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:51.167089165 +0000 UTC m=+1.821343376" watchObservedRunningTime="2025-11-01 10:35:51.193088326 +0000 UTC m=+1.847342539"
	Nov 01 10:35:51 pause-405879 kubelet[1298]: I1101 10:35:51.482990    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-405879" podStartSLOduration=3.482971485 podStartE2EDuration="3.482971485s" podCreationTimestamp="2025-11-01 10:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:51.193324397 +0000 UTC m=+1.847578610" watchObservedRunningTime="2025-11-01 10:35:51.482971485 +0000 UTC m=+2.137225697"
	Nov 01 10:35:54 pause-405879 kubelet[1298]: I1101 10:35:54.185266    1298 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:35:54 pause-405879 kubelet[1298]: I1101 10:35:54.186074    1298 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.267900    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cac0a2ea-c070-47e3-a046-54360b8d8c69-kube-proxy\") pod \"kube-proxy-2s44m\" (UID: \"cac0a2ea-c070-47e3-a046-54360b8d8c69\") " pod="kube-system/kube-proxy-2s44m"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.267944    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cac0a2ea-c070-47e3-a046-54360b8d8c69-xtables-lock\") pod \"kube-proxy-2s44m\" (UID: \"cac0a2ea-c070-47e3-a046-54360b8d8c69\") " pod="kube-system/kube-proxy-2s44m"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.267969    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cac0a2ea-c070-47e3-a046-54360b8d8c69-lib-modules\") pod \"kube-proxy-2s44m\" (UID: \"cac0a2ea-c070-47e3-a046-54360b8d8c69\") " pod="kube-system/kube-proxy-2s44m"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.267995    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdt2g\" (UniqueName: \"kubernetes.io/projected/cac0a2ea-c070-47e3-a046-54360b8d8c69-kube-api-access-bdt2g\") pod \"kube-proxy-2s44m\" (UID: \"cac0a2ea-c070-47e3-a046-54360b8d8c69\") " pod="kube-system/kube-proxy-2s44m"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.268049    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eaa61734-624e-40f5-a34d-ce82c6c226ae-cni-cfg\") pod \"kindnet-trqjm\" (UID: \"eaa61734-624e-40f5-a34d-ce82c6c226ae\") " pod="kube-system/kindnet-trqjm"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.268071    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eaa61734-624e-40f5-a34d-ce82c6c226ae-xtables-lock\") pod \"kindnet-trqjm\" (UID: \"eaa61734-624e-40f5-a34d-ce82c6c226ae\") " pod="kube-system/kindnet-trqjm"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.268094    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9x75\" (UniqueName: \"kubernetes.io/projected/eaa61734-624e-40f5-a34d-ce82c6c226ae-kube-api-access-c9x75\") pod \"kindnet-trqjm\" (UID: \"eaa61734-624e-40f5-a34d-ce82c6c226ae\") " pod="kube-system/kindnet-trqjm"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.268177    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eaa61734-624e-40f5-a34d-ce82c6c226ae-lib-modules\") pod \"kindnet-trqjm\" (UID: \"eaa61734-624e-40f5-a34d-ce82c6c226ae\") " pod="kube-system/kindnet-trqjm"
	Nov 01 10:35:56 pause-405879 kubelet[1298]: I1101 10:35:56.517711    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-trqjm" podStartSLOduration=1.5176879140000001 podStartE2EDuration="1.517687914s" podCreationTimestamp="2025-11-01 10:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:56.5055434 +0000 UTC m=+7.159797614" watchObservedRunningTime="2025-11-01 10:35:56.517687914 +0000 UTC m=+7.171942127"
	Nov 01 10:35:58 pause-405879 kubelet[1298]: I1101 10:35:58.375659    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2s44m" podStartSLOduration=3.3756377410000002 podStartE2EDuration="3.375637741s" podCreationTimestamp="2025-11-01 10:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:56.518200363 +0000 UTC m=+7.172454576" watchObservedRunningTime="2025-11-01 10:35:58.375637741 +0000 UTC m=+9.029891953"
	Nov 01 10:36:06 pause-405879 kubelet[1298]: I1101 10:36:06.344687    1298 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:36:06 pause-405879 kubelet[1298]: I1101 10:36:06.448185    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c86c63f4-74d0-46b2-b04c-725db887620d-config-volume\") pod \"coredns-66bc5c9577-qksbx\" (UID: \"c86c63f4-74d0-46b2-b04c-725db887620d\") " pod="kube-system/coredns-66bc5c9577-qksbx"
	Nov 01 10:36:06 pause-405879 kubelet[1298]: I1101 10:36:06.448249    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpczm\" (UniqueName: \"kubernetes.io/projected/c86c63f4-74d0-46b2-b04c-725db887620d-kube-api-access-qpczm\") pod \"coredns-66bc5c9577-qksbx\" (UID: \"c86c63f4-74d0-46b2-b04c-725db887620d\") " pod="kube-system/coredns-66bc5c9577-qksbx"
	Nov 01 10:36:07 pause-405879 kubelet[1298]: I1101 10:36:07.549824    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qksbx" podStartSLOduration=12.549799692 podStartE2EDuration="12.549799692s" podCreationTimestamp="2025-11-01 10:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:36:07.531376115 +0000 UTC m=+18.185630329" watchObservedRunningTime="2025-11-01 10:36:07.549799692 +0000 UTC m=+18.204053904"
	Nov 01 10:36:16 pause-405879 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:36:16 pause-405879 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:36:16 pause-405879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:36:16 pause-405879 systemd[1]: kubelet.service: Consumed 1.154s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-405879 -n pause-405879
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-405879 -n pause-405879: exit status 2 (389.178513ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-405879 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-405879
helpers_test.go:243: (dbg) docker inspect pause-405879:

-- stdout --
	[
	    {
	        "Id": "754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15",
	        "Created": "2025-11-01T10:35:35.045461825Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253486,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:35:35.08859762Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15/hostname",
	        "HostsPath": "/var/lib/docker/containers/754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15/hosts",
	        "LogPath": "/var/lib/docker/containers/754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15/754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15-json.log",
	        "Name": "/pause-405879",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-405879:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-405879",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "754e54d0295d335704b744d3b69c0e65205a1c099585d434c1928cc0cb74bb15",
	                "LowerDir": "/var/lib/docker/overlay2/cd784938e042db5bb208f24b8074a7acb6bf96587a02e53de81b06ace65b5c8b-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd784938e042db5bb208f24b8074a7acb6bf96587a02e53de81b06ace65b5c8b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd784938e042db5bb208f24b8074a7acb6bf96587a02e53de81b06ace65b5c8b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd784938e042db5bb208f24b8074a7acb6bf96587a02e53de81b06ace65b5c8b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-405879",
	                "Source": "/var/lib/docker/volumes/pause-405879/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-405879",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-405879",
	                "name.minikube.sigs.k8s.io": "pause-405879",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "502016ba2a712cef2cc32a37f29eb61571c3bb2e4b1c9a06eda7f92bb42fe3d5",
	            "SandboxKey": "/var/run/docker/netns/502016ba2a71",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-405879": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:fe:47:5f:55:ed",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e6b76904196f7a013a507cd799387e87773503efa226265ce4c1e178b68026d9",
	                    "EndpointID": "7f121aeda7f05cddb4058b62ac6da378daa9ec896a261cb07e1547d40c6fab8b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-405879",
	                        "754e54d0295d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-405879 -n pause-405879
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-405879 -n pause-405879: exit status 2 (329.641024ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-405879 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-299863 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo cri-dockerd --version                                                                                 │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo systemctl cat containerd --no-pager                                                                   │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo cat /etc/containerd/config.toml                                                                       │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo containerd config dump                                                                                │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo systemctl cat crio --no-pager                                                                         │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ ssh     │ -p cilium-299863 sudo crio config                                                                                           │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ delete  │ -p cilium-299863                                                                                                            │ cilium-299863             │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:35 UTC │
	│ start   │ -p pause-405879 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-405879              │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ delete  │ -p missing-upgrade-834138                                                                                                   │ missing-upgrade-834138    │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:35 UTC │
	│ start   │ -p NoKubernetes-585638 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-585638       │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │                     │
	│ start   │ -p NoKubernetes-585638 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-585638       │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p running-upgrade-376123 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ running-upgrade-376123    │ jenkins │ v1.37.0 │ 01 Nov 25 10:35 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p NoKubernetes-585638 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-585638       │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ start   │ -p pause-405879 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-405879              │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ delete  │ -p running-upgrade-376123                                                                                                   │ running-upgrade-376123    │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │ 01 Nov 25 10:36 UTC │
	│ start   │ -p force-systemd-flag-841776 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-841776 │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	│ pause   │ -p pause-405879 --alsologtostderr -v=5                                                                                      │ pause-405879              │ jenkins │ v1.37.0 │ 01 Nov 25 10:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:36:14
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:36:14.884609  264742 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:36:14.884919  264742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:36:14.884930  264742 out.go:374] Setting ErrFile to fd 2...
	I1101 10:36:14.884934  264742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:36:14.885127  264742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:36:14.886033  264742 out.go:368] Setting JSON to false
	I1101 10:36:14.887417  264742 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8315,"bootTime":1761985060,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:36:14.887548  264742 start.go:143] virtualization: kvm guest
	I1101 10:36:14.889306  264742 out.go:179] * [force-systemd-flag-841776] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:36:14.890558  264742 notify.go:221] Checking for updates...
	I1101 10:36:14.890575  264742 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:36:14.891842  264742 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:36:14.893143  264742 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:36:14.894263  264742 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:36:14.895514  264742 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:36:14.896588  264742 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:36:14.897992  264742 config.go:182] Loaded profile config "NoKubernetes-585638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 10:36:14.898091  264742 config.go:182] Loaded profile config "kubernetes-upgrade-896514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:36:14.898192  264742 config.go:182] Loaded profile config "pause-405879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:36:14.898285  264742 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:36:14.922071  264742 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:36:14.922157  264742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:36:14.976925  264742 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:36:14.967785642 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:36:14.977047  264742 docker.go:319] overlay module found
	I1101 10:36:14.978698  264742 out.go:179] * Using the docker driver based on user configuration
	I1101 10:36:10.962625  237182 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.055950297s)
	W1101 10:36:10.962678  237182 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1101 10:36:10.962687  237182 logs.go:123] Gathering logs for kube-apiserver [dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c] ...
	I1101 10:36:10.962702  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c"
	I1101 10:36:11.002217  237182 logs.go:123] Gathering logs for kube-apiserver [36993c40ba3ce532dfa039a8eaa02c6581b55012f5c6c3e866b4208c1f4e09c2] ...
	I1101 10:36:11.002262  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 36993c40ba3ce532dfa039a8eaa02c6581b55012f5c6c3e866b4208c1f4e09c2"
	I1101 10:36:13.547404  237182 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:36:13.915559  237182 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:32996->192.168.85.2:8443: read: connection reset by peer
	I1101 10:36:13.915630  237182 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:36:13.915691  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:36:13.948777  237182 cri.go:89] found id: "dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c"
	I1101 10:36:13.948797  237182 cri.go:89] found id: "36993c40ba3ce532dfa039a8eaa02c6581b55012f5c6c3e866b4208c1f4e09c2"
	I1101 10:36:13.948800  237182 cri.go:89] found id: ""
	I1101 10:36:13.948808  237182 logs.go:282] 2 containers: [dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c 36993c40ba3ce532dfa039a8eaa02c6581b55012f5c6c3e866b4208c1f4e09c2]
	I1101 10:36:13.948861  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:13.952959  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:13.956647  237182 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:36:13.956724  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:36:13.985668  237182 cri.go:89] found id: ""
	I1101 10:36:13.985696  237182 logs.go:282] 0 containers: []
	W1101 10:36:13.985707  237182 logs.go:284] No container was found matching "etcd"
	I1101 10:36:13.985715  237182 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:36:13.985775  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:36:14.014814  237182 cri.go:89] found id: ""
	I1101 10:36:14.014843  237182 logs.go:282] 0 containers: []
	W1101 10:36:14.014853  237182 logs.go:284] No container was found matching "coredns"
	I1101 10:36:14.014860  237182 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:36:14.014916  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:36:14.044314  237182 cri.go:89] found id: "2b649fc79df1e1125652e447be238ee4f0ea051389f69d7e64de077220268798"
	I1101 10:36:14.044334  237182 cri.go:89] found id: ""
	I1101 10:36:14.044342  237182 logs.go:282] 1 containers: [2b649fc79df1e1125652e447be238ee4f0ea051389f69d7e64de077220268798]
	I1101 10:36:14.044421  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:14.048361  237182 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:36:14.048421  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:36:14.080747  237182 cri.go:89] found id: ""
	I1101 10:36:14.080768  237182 logs.go:282] 0 containers: []
	W1101 10:36:14.080775  237182 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:36:14.080793  237182 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:36:14.080856  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:36:14.110056  237182 cri.go:89] found id: "1d262692167e3747a621e8e27e09de7c18a1e69a281b7fa491fc0231fb8a967f"
	I1101 10:36:14.110081  237182 cri.go:89] found id: "4ab68c0fbf05433ac7361883f7f80ca943cad4a6dd191f26592ced492039eab0"
	I1101 10:36:14.110086  237182 cri.go:89] found id: ""
	I1101 10:36:14.110098  237182 logs.go:282] 2 containers: [1d262692167e3747a621e8e27e09de7c18a1e69a281b7fa491fc0231fb8a967f 4ab68c0fbf05433ac7361883f7f80ca943cad4a6dd191f26592ced492039eab0]
	I1101 10:36:14.110159  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:14.114385  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:14.118191  237182 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:36:14.118237  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:36:14.144514  237182 cri.go:89] found id: ""
	I1101 10:36:14.144542  237182 logs.go:282] 0 containers: []
	W1101 10:36:14.144553  237182 logs.go:284] No container was found matching "kindnet"
	I1101 10:36:14.144562  237182 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:36:14.144610  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:36:14.172730  237182 cri.go:89] found id: ""
	I1101 10:36:14.172755  237182 logs.go:282] 0 containers: []
	W1101 10:36:14.172765  237182 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:36:14.172784  237182 logs.go:123] Gathering logs for dmesg ...
	I1101 10:36:14.172798  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:36:14.189706  237182 logs.go:123] Gathering logs for kube-apiserver [dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c] ...
	I1101 10:36:14.189739  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c"
	I1101 10:36:14.227034  237182 logs.go:123] Gathering logs for kube-scheduler [2b649fc79df1e1125652e447be238ee4f0ea051389f69d7e64de077220268798] ...
	I1101 10:36:14.227064  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2b649fc79df1e1125652e447be238ee4f0ea051389f69d7e64de077220268798"
	I1101 10:36:14.279143  237182 logs.go:123] Gathering logs for container status ...
	I1101 10:36:14.279180  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:36:14.310995  237182 logs.go:123] Gathering logs for kubelet ...
	I1101 10:36:14.311031  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:36:14.377622  237182 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:36:14.377658  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:36:14.435819  237182 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:36:14.435841  237182 logs.go:123] Gathering logs for kube-apiserver [36993c40ba3ce532dfa039a8eaa02c6581b55012f5c6c3e866b4208c1f4e09c2] ...
	I1101 10:36:14.435858  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 36993c40ba3ce532dfa039a8eaa02c6581b55012f5c6c3e866b4208c1f4e09c2"
	I1101 10:36:14.472004  237182 logs.go:123] Gathering logs for kube-controller-manager [1d262692167e3747a621e8e27e09de7c18a1e69a281b7fa491fc0231fb8a967f] ...
	I1101 10:36:14.472035  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d262692167e3747a621e8e27e09de7c18a1e69a281b7fa491fc0231fb8a967f"
	I1101 10:36:14.500814  237182 logs.go:123] Gathering logs for kube-controller-manager [4ab68c0fbf05433ac7361883f7f80ca943cad4a6dd191f26592ced492039eab0] ...
	I1101 10:36:14.500841  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ab68c0fbf05433ac7361883f7f80ca943cad4a6dd191f26592ced492039eab0"
	I1101 10:36:14.528080  237182 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:36:14.528132  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 10:36:14.979862  264742 start.go:309] selected driver: docker
	I1101 10:36:14.979874  264742 start.go:930] validating driver "docker" against <nil>
	I1101 10:36:14.979887  264742 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:36:14.980674  264742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:36:15.037369  264742 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:36:15.02814876 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:36:15.037595  264742 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:36:15.037811  264742 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 10:36:15.039421  264742 out.go:179] * Using Docker driver with root privileges
	I1101 10:36:15.040481  264742 cni.go:84] Creating CNI manager for ""
	I1101 10:36:15.040570  264742 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:36:15.040583  264742 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:36:15.040649  264742 start.go:353] cluster config:
	{Name:force-systemd-flag-841776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-841776 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:36:15.041960  264742 out.go:179] * Starting "force-systemd-flag-841776" primary control-plane node in "force-systemd-flag-841776" cluster
	I1101 10:36:15.043040  264742 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:36:15.044198  264742 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:36:15.045523  264742 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:36:15.045554  264742 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:36:15.045570  264742 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:36:15.045583  264742 cache.go:59] Caching tarball of preloaded images
	I1101 10:36:15.045677  264742 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:36:15.045689  264742 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:36:15.045800  264742 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/force-systemd-flag-841776/config.json ...
	I1101 10:36:15.045825  264742 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/force-systemd-flag-841776/config.json: {Name:mk7449200b91430e8697351111f6425117f6a5dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:36:15.065881  264742 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:36:15.065904  264742 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:36:15.065919  264742 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:36:15.065945  264742 start.go:360] acquireMachinesLock for force-systemd-flag-841776: {Name:mk1ee397c6ff1e52820690aa6db5f10820ff5978 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:36:15.066043  264742 start.go:364] duration metric: took 81.522µs to acquireMachinesLock for "force-systemd-flag-841776"
	I1101 10:36:15.066067  264742 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-841776 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-841776 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:36:15.066133  264742 start.go:125] createHost starting for "" (driver="docker")
	I1101 10:36:15.065327  262749 pod_ready.go:94] pod "kube-proxy-2s44m" is "Ready"
	I1101 10:36:15.065352  262749 pod_ready.go:86] duration metric: took 400.799728ms for pod "kube-proxy-2s44m" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:36:15.266262  262749 pod_ready.go:83] waiting for pod "kube-scheduler-pause-405879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:36:15.663863  262749 pod_ready.go:94] pod "kube-scheduler-pause-405879" is "Ready"
	I1101 10:36:15.663891  262749 pod_ready.go:86] duration metric: took 397.601074ms for pod "kube-scheduler-pause-405879" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:36:15.663906  262749 pod_ready.go:40] duration metric: took 1.604203696s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:36:15.717681  262749 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:36:15.720442  262749 out.go:179] * Done! kubectl is now configured to use "pause-405879" cluster and "default" namespace by default
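Note: the readiness wait above (pod_ready.go, PID 262749) polls kube-system pods carrying the listed control-plane and kube-proxy/kube-dns labels. A minimal sketch of re-running the same check by hand, assuming the "pause-405879" context is present in the local kubeconfig (not something this excerpt guarantees):

    # Sketch only: poll the same kube-system pods the wait loop watches.
    kubectl --context pause-405879 -n kube-system get pods \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'
    kubectl --context pause-405879 -n kube-system get pods -l k8s-app=kube-proxy
    kubectl --context pause-405879 -n kube-system get pods -l k8s-app=kube-dns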
	I1101 10:36:15.067828  264742 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:36:15.068060  264742 start.go:159] libmachine.API.Create for "force-systemd-flag-841776" (driver="docker")
	I1101 10:36:15.068116  264742 client.go:173] LocalClient.Create starting
	I1101 10:36:15.068194  264742 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem
	I1101 10:36:15.068238  264742 main.go:143] libmachine: Decoding PEM data...
	I1101 10:36:15.068270  264742 main.go:143] libmachine: Parsing certificate...
	I1101 10:36:15.068370  264742 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem
	I1101 10:36:15.068415  264742 main.go:143] libmachine: Decoding PEM data...
	I1101 10:36:15.068432  264742 main.go:143] libmachine: Parsing certificate...
	I1101 10:36:15.068805  264742 cli_runner.go:164] Run: docker network inspect force-systemd-flag-841776 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:36:15.085632  264742 cli_runner.go:211] docker network inspect force-systemd-flag-841776 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:36:15.085729  264742 network_create.go:284] running [docker network inspect force-systemd-flag-841776] to gather additional debugging logs...
	I1101 10:36:15.085750  264742 cli_runner.go:164] Run: docker network inspect force-systemd-flag-841776
	W1101 10:36:15.102174  264742 cli_runner.go:211] docker network inspect force-systemd-flag-841776 returned with exit code 1
	I1101 10:36:15.102214  264742 network_create.go:287] error running [docker network inspect force-systemd-flag-841776]: docker network inspect force-systemd-flag-841776: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-841776 not found
	I1101 10:36:15.102229  264742 network_create.go:289] output of [docker network inspect force-systemd-flag-841776]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-841776 not found
	
	** /stderr **
	I1101 10:36:15.102346  264742 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:36:15.120316  264742 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ac7093b735a5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:19:58:44:be:58} reservation:<nil>}
	I1101 10:36:15.120839  264742 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2c03ebffc507 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:41:85:56:13:7f} reservation:<nil>}
	I1101 10:36:15.121349  264742 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-abee7b1ad47f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:f3:16:31:10:75} reservation:<nil>}
	I1101 10:36:15.122034  264742 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-917dbaa70d76 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:ab:ea:85:30:56} reservation:<nil>}
	I1101 10:36:15.122661  264742 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a93816dd7643 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:56:1d:6c:55:f4:cb} reservation:<nil>}
	I1101 10:36:15.123471  264742 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ef8e50}
	I1101 10:36:15.123513  264742 network_create.go:124] attempt to create docker network force-systemd-flag-841776 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1101 10:36:15.123568  264742 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-841776 force-systemd-flag-841776
	I1101 10:36:15.178881  264742 network_create.go:108] docker network force-systemd-flag-841776 192.168.94.0/24 created
	I1101 10:36:15.178912  264742 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-841776" container
	I1101 10:36:15.178980  264742 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:36:15.195997  264742 cli_runner.go:164] Run: docker volume create force-systemd-flag-841776 --label name.minikube.sigs.k8s.io=force-systemd-flag-841776 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:36:15.213590  264742 oci.go:103] Successfully created a docker volume force-systemd-flag-841776
	I1101 10:36:15.213736  264742 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-841776-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-841776 --entrypoint /usr/bin/test -v force-systemd-flag-841776:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:36:15.592798  264742 oci.go:107] Successfully prepared a docker volume force-systemd-flag-841776
	I1101 10:36:15.592836  264742 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:36:15.592858  264742 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:36:15.592919  264742 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-841776:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
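Note: earlier in this burst (network.go, 10:36:15.120–15.123) the subnet probe skipped 192.168.49/58/67/76/85 because existing bridges already own them, then created force-systemd-flag-841776 on 192.168.94.0/24. A rough way to see those claims from the host; these commands are illustrative and were not part of the test run:

    # Which /24s do local Docker networks already occupy?
    docker network ls -q | xargs docker network inspect \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
    # Confirm the subnet and gateway minikube just picked.
    docker network inspect force-systemd-flag-841776 \
      --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'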
	I1101 10:36:17.072262  237182 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:36:17.072809  237182 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1101 10:36:17.072884  237182 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 10:36:17.072939  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 10:36:17.099625  237182 cri.go:89] found id: "dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c"
	I1101 10:36:17.099649  237182 cri.go:89] found id: ""
	I1101 10:36:17.099659  237182 logs.go:282] 1 containers: [dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c]
	I1101 10:36:17.099731  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:17.103879  237182 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 10:36:17.103939  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 10:36:17.133660  237182 cri.go:89] found id: ""
	I1101 10:36:17.133688  237182 logs.go:282] 0 containers: []
	W1101 10:36:17.133699  237182 logs.go:284] No container was found matching "etcd"
	I1101 10:36:17.133707  237182 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 10:36:17.133767  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 10:36:17.163533  237182 cri.go:89] found id: ""
	I1101 10:36:17.163560  237182 logs.go:282] 0 containers: []
	W1101 10:36:17.163569  237182 logs.go:284] No container was found matching "coredns"
	I1101 10:36:17.163577  237182 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 10:36:17.163642  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 10:36:17.202399  237182 cri.go:89] found id: "2b649fc79df1e1125652e447be238ee4f0ea051389f69d7e64de077220268798"
	I1101 10:36:17.202423  237182 cri.go:89] found id: ""
	I1101 10:36:17.202434  237182 logs.go:282] 1 containers: [2b649fc79df1e1125652e447be238ee4f0ea051389f69d7e64de077220268798]
	I1101 10:36:17.202504  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:17.206740  237182 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 10:36:17.206812  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 10:36:17.235056  237182 cri.go:89] found id: ""
	I1101 10:36:17.235086  237182 logs.go:282] 0 containers: []
	W1101 10:36:17.235098  237182 logs.go:284] No container was found matching "kube-proxy"
	I1101 10:36:17.235105  237182 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 10:36:17.235165  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 10:36:17.269309  237182 cri.go:89] found id: "1d262692167e3747a621e8e27e09de7c18a1e69a281b7fa491fc0231fb8a967f"
	I1101 10:36:17.269336  237182 cri.go:89] found id: "4ab68c0fbf05433ac7361883f7f80ca943cad4a6dd191f26592ced492039eab0"
	I1101 10:36:17.269341  237182 cri.go:89] found id: ""
	I1101 10:36:17.269351  237182 logs.go:282] 2 containers: [1d262692167e3747a621e8e27e09de7c18a1e69a281b7fa491fc0231fb8a967f 4ab68c0fbf05433ac7361883f7f80ca943cad4a6dd191f26592ced492039eab0]
	I1101 10:36:17.269418  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:17.273926  237182 ssh_runner.go:195] Run: which crictl
	I1101 10:36:17.277808  237182 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 10:36:17.277866  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 10:36:17.306035  237182 cri.go:89] found id: ""
	I1101 10:36:17.306075  237182 logs.go:282] 0 containers: []
	W1101 10:36:17.306090  237182 logs.go:284] No container was found matching "kindnet"
	I1101 10:36:17.306099  237182 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 10:36:17.306161  237182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 10:36:17.338646  237182 cri.go:89] found id: ""
	I1101 10:36:17.338675  237182 logs.go:282] 0 containers: []
	W1101 10:36:17.338704  237182 logs.go:284] No container was found matching "storage-provisioner"
	I1101 10:36:17.338731  237182 logs.go:123] Gathering logs for kubelet ...
	I1101 10:36:17.338752  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 10:36:17.413081  237182 logs.go:123] Gathering logs for dmesg ...
	I1101 10:36:17.413124  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 10:36:17.430187  237182 logs.go:123] Gathering logs for kube-scheduler [2b649fc79df1e1125652e447be238ee4f0ea051389f69d7e64de077220268798] ...
	I1101 10:36:17.430218  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2b649fc79df1e1125652e447be238ee4f0ea051389f69d7e64de077220268798"
	I1101 10:36:17.475637  237182 logs.go:123] Gathering logs for kube-controller-manager [1d262692167e3747a621e8e27e09de7c18a1e69a281b7fa491fc0231fb8a967f] ...
	I1101 10:36:17.475667  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1d262692167e3747a621e8e27e09de7c18a1e69a281b7fa491fc0231fb8a967f"
	I1101 10:36:17.503706  237182 logs.go:123] Gathering logs for kube-controller-manager [4ab68c0fbf05433ac7361883f7f80ca943cad4a6dd191f26592ced492039eab0] ...
	I1101 10:36:17.503747  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ab68c0fbf05433ac7361883f7f80ca943cad4a6dd191f26592ced492039eab0"
	I1101 10:36:17.532342  237182 logs.go:123] Gathering logs for container status ...
	I1101 10:36:17.532380  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 10:36:17.562808  237182 logs.go:123] Gathering logs for describe nodes ...
	I1101 10:36:17.562841  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 10:36:17.619118  237182 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 10:36:17.619143  237182 logs.go:123] Gathering logs for kube-apiserver [dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c] ...
	I1101 10:36:17.619161  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dea7925583b6c799ffbab7f1a082934307e6164afe534ad3170fa2bbc3e15c3c"
	I1101 10:36:17.652005  237182 logs.go:123] Gathering logs for CRI-O ...
	I1101 10:36:17.652036  237182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
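Note: the log-gathering pass above (logs.go, PID 237182) simply shells into the node and runs the commands shown. The same collection by hand, after "minikube -p <profile> ssh" (<profile> is a placeholder; the profile name for this PID is not visible in the excerpt):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a
    sudo crictl logs --tail 400 <container-id>   # <container-id> taken from the ps output
    sudo journalctl -u crio -n 400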
	
	
	==> CRI-O <==
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.697139927Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.698056614Z" level=info msg="Conmon does support the --sync option"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.698072807Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.698086727Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.698844395Z" level=info msg="Conmon does support the --sync option"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.698863973Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.70283858Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.702860312Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.70334655Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hook
s.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_m
appings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.703830639Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.703902232Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.709776559Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.754077228Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-qksbx Namespace:kube-system ID:1bb50a34203cb1ebf1175e4b010b6658a3de28a37f1f41e30553b2a37349e54f UID:c86c63f4-74d0-46b2-b04c-725db887620d NetNS:/var/run/netns/1e9bdf9d-8326-4c27-a1be-e12ce72b403f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00012c4b8}] Aliases:map[]}"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.754411538Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-qksbx for CNI network kindnet (type=ptp)"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755350989Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755380756Z" level=info msg="Starting seccomp notifier watcher"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755433441Z" level=info msg="Create NRI interface"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755554867Z" level=info msg="built-in NRI default validator is disabled"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.75556574Z" level=info msg="runtime interface created"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755578394Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755584584Z" level=info msg="runtime interface starting up..."
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755590047Z" level=info msg="starting plugins..."
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755600409Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 01 10:36:12 pause-405879 crio[2136]: time="2025-11-01T10:36:12.755955363Z" level=info msg="No systemd watchdog enabled"
	Nov 01 10:36:12 pause-405879 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	154497e458430       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   0                   1bb50a34203cb       coredns-66bc5c9577-qksbx               kube-system
	4e3ba8e1b76b5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   26 seconds ago      Running             kube-proxy                0                   aefadb651706c       kube-proxy-2s44m                       kube-system
	0a6b9e58d85ab       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   26 seconds ago      Running             kindnet-cni               0                   a9b62fc12cadd       kindnet-trqjm                          kube-system
	0011544ba1ef7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   36 seconds ago      Running             etcd                      0                   2e592da1f15d9       etcd-pause-405879                      kube-system
	3a8ad7e0e7634       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   36 seconds ago      Running             kube-controller-manager   0                   fa9df57d7f374       kube-controller-manager-pause-405879   kube-system
	92360e8c9a9bc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago      Running             kube-scheduler            0                   52fab05af124c       kube-scheduler-pause-405879            kube-system
	08681f77a7ca0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   36 seconds ago      Running             kube-apiserver            0                   47ebaa8d7dfe0       kube-apiserver-pause-405879            kube-system
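Note: the container status table above is the node's CRI view, produced by the same "crictl ps -a" fallback shown in the log gathering earlier. A sketch for digging into a single entry from inside the node; <container-id> is a placeholder for one of the IDs listed:

    sudo crictl ps -a
    sudo crictl inspect <container-id>
    sudo crictl logs --tail 100 <container-id>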
	
	
	==> coredns [154497e4584307753fbc1163ec426cf44c3aaf91ffece1764468680d374d336d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37294 - 26462 "HINFO IN 1829789769323189349.4354050200118358698. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032372635s
	
	
	==> describe nodes <==
	Name:               pause-405879
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-405879
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=pause-405879
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_35_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:35:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-405879
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:36:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:36:06 +0000   Sat, 01 Nov 2025 10:35:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:36:06 +0000   Sat, 01 Nov 2025 10:35:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:36:06 +0000   Sat, 01 Nov 2025 10:35:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:36:06 +0000   Sat, 01 Nov 2025 10:36:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-405879
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                85cefb4f-0cb0-45ec-95d1-f9d37de67ff4
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-qksbx                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-405879                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-trqjm                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-405879             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-405879    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-2s44m                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-405879             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node pause-405879 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node pause-405879 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node pause-405879 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s                kubelet          Node pause-405879 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet          Node pause-405879 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet          Node pause-405879 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node pause-405879 event: Registered Node pause-405879 in Controller
	  Normal  NodeReady                16s                kubelet          Node pause-405879 status is now: NodeReady
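Note: this "describe nodes" block is the kubectl view of pause-405879 captured by the log dump. Two ways to reproduce it, assuming the profile is still running: the kubectl pass-through from the test workspace, or the exact in-node invocation the gatherer uses (shown verbatim at 10:36:17.562 above):

    # From the workspace (profile must still exist):
    out/minikube-linux-amd64 -p pause-405879 kubectl -- describe node pause-405879
    # From inside the node (same command logs.go runs):
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig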
	
	
	==> dmesg <==
	[Nov 1 09:57] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.028293] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023905] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023938] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023872] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +1.023934] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +2.047845] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[Nov 1 09:58] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[  +8.191344] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[ +16.382718] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	[ +32.253574] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: d6 97 b6 52 3b a0 16 9f 7e 0a 54 c7 08 00
	
	
	==> etcd [0011544ba1ef753d375b43daa1e4a6000f7f2f19fbf46baa1f844bfc4c49d3d6] <==
	{"level":"warn","ts":"2025-11-01T10:35:50.700090Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.996573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-01T10:35:50.700449Z","caller":"traceutil/trace.go:172","msg":"trace[1557822150] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:298; }","duration":"108.353737ms","start":"2025-11-01T10:35:50.592080Z","end":"2025-11-01T10:35:50.700434Z","steps":["trace[1557822150] 'agreement among raft nodes before linearized reading'  (duration: 107.932085ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:50.908803Z","caller":"traceutil/trace.go:172","msg":"trace[1541339381] linearizableReadLoop","detail":"{readStateIndex:309; appliedIndex:309; }","duration":"192.997047ms","start":"2025-11-01T10:35:50.715784Z","end":"2025-11-01T10:35:50.908781Z","steps":["trace[1541339381] 'read index received'  (duration: 192.98929ms)","trace[1541339381] 'applied index is now lower than readState.Index'  (duration: 6.193µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:35:50.908928Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.13029ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/job-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:35:50.908960Z","caller":"traceutil/trace.go:172","msg":"trace[89969182] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/job-controller; range_end:; response_count:0; response_revision:298; }","duration":"193.179223ms","start":"2025-11-01T10:35:50.715770Z","end":"2025-11-01T10:35:50.908949Z","steps":["trace[89969182] 'agreement among raft nodes before linearized reading'  (duration: 193.088872ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:50.909011Z","caller":"traceutil/trace.go:172","msg":"trace[1610482022] transaction","detail":"{read_only:false; response_revision:299; number_of_response:1; }","duration":"199.909457ms","start":"2025-11-01T10:35:50.709088Z","end":"2025-11-01T10:35:50.908997Z","steps":["trace[1610482022] 'process raft request'  (duration: 199.773037ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:35:51.165339Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.898693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/kindnet\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:35:51.165411Z","caller":"traceutil/trace.go:172","msg":"trace[1816080095] range","detail":"{range_begin:/registry/clusterroles/kindnet; range_end:; response_count:0; response_revision:299; }","duration":"166.972492ms","start":"2025-11-01T10:35:50.998415Z","end":"2025-11-01T10:35:51.165388Z","steps":["trace[1816080095] 'agreement among raft nodes before linearized reading'  (duration: 37.984659ms)","trace[1816080095] 'range keys from in-memory index tree'  (duration: 128.875395ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:35:51.165478Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.976301ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789722411185083 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/job-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/job-controller\" value_size:119 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:35:51.165661Z","caller":"traceutil/trace.go:172","msg":"trace[1570133166] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"251.623091ms","start":"2025-11-01T10:35:50.914023Z","end":"2025-11-01T10:35:51.165646Z","steps":["trace[1570133166] 'process raft request'  (duration: 122.429953ms)","trace[1570133166] 'compare'  (duration: 128.86411ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:35:51.165665Z","caller":"traceutil/trace.go:172","msg":"trace[1948869252] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"247.892911ms","start":"2025-11-01T10:35:50.917762Z","end":"2025-11-01T10:35:51.165655Z","steps":["trace[1948869252] 'process raft request'  (duration: 247.811843ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:35:51.480887Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.2514ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789722411185100 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:35:51.481081Z","caller":"traceutil/trace.go:172","msg":"trace[1792339761] transaction","detail":"{read_only:false; response_revision:306; number_of_response:1; }","duration":"276.539161ms","start":"2025-11-01T10:35:51.204521Z","end":"2025-11-01T10:35:51.481060Z","steps":["trace[1792339761] 'process raft request'  (duration: 126.062487ms)","trace[1792339761] 'compare'  (duration: 150.144217ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:35:51.481160Z","caller":"traceutil/trace.go:172","msg":"trace[776775587] transaction","detail":"{read_only:false; response_revision:308; number_of_response:1; }","duration":"273.777384ms","start":"2025-11-01T10:35:51.207372Z","end":"2025-11-01T10:35:51.481149Z","steps":["trace[776775587] 'process raft request'  (duration: 273.724763ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:51.481373Z","caller":"traceutil/trace.go:172","msg":"trace[42816039] transaction","detail":"{read_only:false; response_revision:307; number_of_response:1; }","duration":"276.583673ms","start":"2025-11-01T10:35:51.204779Z","end":"2025-11-01T10:35:51.481363Z","steps":["trace[42816039] 'process raft request'  (duration: 276.219847ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:51.592425Z","caller":"traceutil/trace.go:172","msg":"trace[1913520560] transaction","detail":"{read_only:false; response_revision:309; number_of_response:1; }","duration":"104.334919ms","start":"2025-11-01T10:35:51.488070Z","end":"2025-11-01T10:35:51.592405Z","steps":["trace[1913520560] 'process raft request'  (duration: 96.393926ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:51.594960Z","caller":"traceutil/trace.go:172","msg":"trace[158421973] transaction","detail":"{read_only:false; response_revision:311; number_of_response:1; }","duration":"104.327293ms","start":"2025-11-01T10:35:51.490620Z","end":"2025-11-01T10:35:51.594947Z","steps":["trace[158421973] 'process raft request'  (duration: 104.282948ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:51.594990Z","caller":"traceutil/trace.go:172","msg":"trace[1030127544] transaction","detail":"{read_only:false; response_revision:310; number_of_response:1; }","duration":"104.961879ms","start":"2025-11-01T10:35:51.490017Z","end":"2025-11-01T10:35:51.594979Z","steps":["trace[1030127544] 'process raft request'  (duration: 104.789428ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:35:51.865156Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.768573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-405879\" limit:1 ","response":"range_response_count:1 size:4812"}
	{"level":"info","ts":"2025-11-01T10:35:51.865217Z","caller":"traceutil/trace.go:172","msg":"trace[969886107] range","detail":"{range_begin:/registry/minions/pause-405879; range_end:; response_count:1; response_revision:312; }","duration":"197.840038ms","start":"2025-11-01T10:35:51.667362Z","end":"2025-11-01T10:35:51.865202Z","steps":["trace[969886107] 'agreement among raft nodes before linearized reading'  (duration: 82.252166ms)","trace[969886107] 'range keys from in-memory index tree'  (duration: 115.412116ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:35:51.865542Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.505636ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789722411185115 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-405879\" mod_revision:301 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-405879\" value_size:7813 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-405879\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:35:51.865688Z","caller":"traceutil/trace.go:172","msg":"trace[125784908] transaction","detail":"{read_only:false; response_revision:313; number_of_response:1; }","duration":"260.915083ms","start":"2025-11-01T10:35:51.604755Z","end":"2025-11-01T10:35:51.865670Z","steps":["trace[125784908] 'process raft request'  (duration: 144.908764ms)","trace[125784908] 'compare'  (duration: 115.388583ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:35:51.865710Z","caller":"traceutil/trace.go:172","msg":"trace[1875747674] transaction","detail":"{read_only:false; response_revision:315; number_of_response:1; }","duration":"200.538874ms","start":"2025-11-01T10:35:51.665161Z","end":"2025-11-01T10:35:51.865700Z","steps":["trace[1875747674] 'process raft request'  (duration: 200.504598ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:35:51.865751Z","caller":"traceutil/trace.go:172","msg":"trace[1856103689] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"259.036254ms","start":"2025-11-01T10:35:51.606700Z","end":"2025-11-01T10:35:51.865737Z","steps":["trace[1856103689] 'process raft request'  (duration: 258.904907ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:36:18.992824Z","caller":"traceutil/trace.go:172","msg":"trace[1475105633] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"144.179995ms","start":"2025-11-01T10:36:18.848625Z","end":"2025-11-01T10:36:18.992805Z","steps":["trace[1475105633] 'process raft request'  (duration: 129.406567ms)","trace[1475105633] 'compare'  (duration: 14.676321ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:36:22 up  2:18,  0 user,  load average: 5.05, 2.66, 1.63
	Linux pause-405879 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0a6b9e58d85ab0c406893099cade3aa9de63e82cb5a62b8daad4a8fe01e95791] <==
	I1101 10:35:55.870488       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:35:55.870817       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 10:35:55.870976       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:35:55.870993       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:35:55.871017       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:35:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:35:55.976474       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:35:55.976565       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:35:55.976577       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:35:55.976712       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:35:56.276659       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:35:56.276685       1 metrics.go:72] Registering metrics
	I1101 10:35:56.276737       1 controller.go:711] "Syncing nftables rules"
	I1101 10:36:06.069744       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:36:06.069834       1 main.go:301] handling current node
	I1101 10:36:16.070585       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:36:16.070630       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08681f77a7ca0ad657ca55ebaeb0f2715715c3bb1e381d90f12bf0152c4b45ff] <==
	I1101 10:35:47.147585       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1101 10:35:47.150962       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:35:47.163346       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:35:47.165809       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:35:47.166069       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:35:47.173813       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:35:47.174036       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:35:47.194163       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:48.056925       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:35:48.064034       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:35:48.064121       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:35:48.621028       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:35:48.659593       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:35:48.758453       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:35:48.764163       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1101 10:35:48.765138       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:35:48.774569       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:35:49.108869       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:35:49.541269       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:35:49.551641       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:35:49.559670       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:35:54.962760       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:54.967444       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:35:55.111677       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:35:55.211471       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [3a8ad7e0e76341d07adeca6ac580feca170a71427f035e1d93c352ee61452bd3] <==
	I1101 10:35:54.094945       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:35:54.095455       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-405879" podCIDRs=["10.244.0.0/24"]
	I1101 10:35:54.097240       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:35:54.102473       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:35:54.102524       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:35:54.107948       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:35:54.108117       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:35:54.108187       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:35:54.108690       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:35:54.108815       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:35:54.111733       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:35:54.113062       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:35:54.114200       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:35:54.115382       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:35:54.116250       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:35:54.120971       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:35:54.120989       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:35:54.120997       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:35:54.123885       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:35:54.124569       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:35:54.128245       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:35:54.129313       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:35:54.130522       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:35:54.137848       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:36:09.073408       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4e3ba8e1b76b5d8e7ea5eba9a2d45211f7050aba1cd11526c15684d155d31010] <==
	I1101 10:35:55.692930       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:35:55.761606       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:35:55.861998       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:35:55.862034       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1101 10:35:55.862100       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:35:55.880701       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:35:55.880760       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:35:55.885739       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:35:55.886079       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:35:55.886117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:35:55.887256       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:35:55.887287       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:35:55.887367       1 config.go:309] "Starting node config controller"
	I1101 10:35:55.887382       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:35:55.887450       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:35:55.887548       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:35:55.887742       1 config.go:200] "Starting service config controller"
	I1101 10:35:55.887769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:35:55.987528       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:35:55.987551       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:35:55.987889       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:35:55.987904       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [92360e8c9a9bca5fe2511249daf7d5d53794fc72ba17f3dc32d065e87322e39c] <==
	E1101 10:35:47.213646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:35:47.213591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:35:47.213645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:35:47.213734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:35:47.213755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:35:47.213816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:35:47.213841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:35:47.213635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:35:47.213886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:35:47.213956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:35:47.213998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:35:48.049693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:35:48.060571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:35:48.070875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:35:48.127794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:35:48.150618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:35:48.159155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:35:48.193298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:35:48.223556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:35:48.251567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:35:48.345935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:35:48.346258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:35:48.402603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:35:48.402656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1101 10:35:48.806002       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:35:50 pause-405879 kubelet[1298]: E1101 10:35:50.703175    1298 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-405879\" already exists" pod="kube-system/kube-apiserver-pause-405879"
	Nov 01 10:35:50 pause-405879 kubelet[1298]: I1101 10:35:50.910643    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-405879" podStartSLOduration=1.910621772 podStartE2EDuration="1.910621772s" podCreationTimestamp="2025-11-01 10:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:50.702714298 +0000 UTC m=+1.356968513" watchObservedRunningTime="2025-11-01 10:35:50.910621772 +0000 UTC m=+1.564875979"
	Nov 01 10:35:51 pause-405879 kubelet[1298]: I1101 10:35:51.166869    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-405879" podStartSLOduration=2.166842899 podStartE2EDuration="2.166842899s" podCreationTimestamp="2025-11-01 10:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:50.91061598 +0000 UTC m=+1.564870192" watchObservedRunningTime="2025-11-01 10:35:51.166842899 +0000 UTC m=+1.821097123"
	Nov 01 10:35:51 pause-405879 kubelet[1298]: I1101 10:35:51.193110    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-405879" podStartSLOduration=2.193088326 podStartE2EDuration="2.193088326s" podCreationTimestamp="2025-11-01 10:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:51.167089165 +0000 UTC m=+1.821343376" watchObservedRunningTime="2025-11-01 10:35:51.193088326 +0000 UTC m=+1.847342539"
	Nov 01 10:35:51 pause-405879 kubelet[1298]: I1101 10:35:51.482990    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-405879" podStartSLOduration=3.482971485 podStartE2EDuration="3.482971485s" podCreationTimestamp="2025-11-01 10:35:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:51.193324397 +0000 UTC m=+1.847578610" watchObservedRunningTime="2025-11-01 10:35:51.482971485 +0000 UTC m=+2.137225697"
	Nov 01 10:35:54 pause-405879 kubelet[1298]: I1101 10:35:54.185266    1298 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:35:54 pause-405879 kubelet[1298]: I1101 10:35:54.186074    1298 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.267900    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cac0a2ea-c070-47e3-a046-54360b8d8c69-kube-proxy\") pod \"kube-proxy-2s44m\" (UID: \"cac0a2ea-c070-47e3-a046-54360b8d8c69\") " pod="kube-system/kube-proxy-2s44m"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.267944    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cac0a2ea-c070-47e3-a046-54360b8d8c69-xtables-lock\") pod \"kube-proxy-2s44m\" (UID: \"cac0a2ea-c070-47e3-a046-54360b8d8c69\") " pod="kube-system/kube-proxy-2s44m"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.267969    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cac0a2ea-c070-47e3-a046-54360b8d8c69-lib-modules\") pod \"kube-proxy-2s44m\" (UID: \"cac0a2ea-c070-47e3-a046-54360b8d8c69\") " pod="kube-system/kube-proxy-2s44m"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.267995    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdt2g\" (UniqueName: \"kubernetes.io/projected/cac0a2ea-c070-47e3-a046-54360b8d8c69-kube-api-access-bdt2g\") pod \"kube-proxy-2s44m\" (UID: \"cac0a2ea-c070-47e3-a046-54360b8d8c69\") " pod="kube-system/kube-proxy-2s44m"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.268049    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eaa61734-624e-40f5-a34d-ce82c6c226ae-cni-cfg\") pod \"kindnet-trqjm\" (UID: \"eaa61734-624e-40f5-a34d-ce82c6c226ae\") " pod="kube-system/kindnet-trqjm"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.268071    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eaa61734-624e-40f5-a34d-ce82c6c226ae-xtables-lock\") pod \"kindnet-trqjm\" (UID: \"eaa61734-624e-40f5-a34d-ce82c6c226ae\") " pod="kube-system/kindnet-trqjm"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.268094    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9x75\" (UniqueName: \"kubernetes.io/projected/eaa61734-624e-40f5-a34d-ce82c6c226ae-kube-api-access-c9x75\") pod \"kindnet-trqjm\" (UID: \"eaa61734-624e-40f5-a34d-ce82c6c226ae\") " pod="kube-system/kindnet-trqjm"
	Nov 01 10:35:55 pause-405879 kubelet[1298]: I1101 10:35:55.268177    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eaa61734-624e-40f5-a34d-ce82c6c226ae-lib-modules\") pod \"kindnet-trqjm\" (UID: \"eaa61734-624e-40f5-a34d-ce82c6c226ae\") " pod="kube-system/kindnet-trqjm"
	Nov 01 10:35:56 pause-405879 kubelet[1298]: I1101 10:35:56.517711    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-trqjm" podStartSLOduration=1.5176879140000001 podStartE2EDuration="1.517687914s" podCreationTimestamp="2025-11-01 10:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:56.5055434 +0000 UTC m=+7.159797614" watchObservedRunningTime="2025-11-01 10:35:56.517687914 +0000 UTC m=+7.171942127"
	Nov 01 10:35:58 pause-405879 kubelet[1298]: I1101 10:35:58.375659    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2s44m" podStartSLOduration=3.3756377410000002 podStartE2EDuration="3.375637741s" podCreationTimestamp="2025-11-01 10:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:35:56.518200363 +0000 UTC m=+7.172454576" watchObservedRunningTime="2025-11-01 10:35:58.375637741 +0000 UTC m=+9.029891953"
	Nov 01 10:36:06 pause-405879 kubelet[1298]: I1101 10:36:06.344687    1298 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:36:06 pause-405879 kubelet[1298]: I1101 10:36:06.448185    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c86c63f4-74d0-46b2-b04c-725db887620d-config-volume\") pod \"coredns-66bc5c9577-qksbx\" (UID: \"c86c63f4-74d0-46b2-b04c-725db887620d\") " pod="kube-system/coredns-66bc5c9577-qksbx"
	Nov 01 10:36:06 pause-405879 kubelet[1298]: I1101 10:36:06.448249    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpczm\" (UniqueName: \"kubernetes.io/projected/c86c63f4-74d0-46b2-b04c-725db887620d-kube-api-access-qpczm\") pod \"coredns-66bc5c9577-qksbx\" (UID: \"c86c63f4-74d0-46b2-b04c-725db887620d\") " pod="kube-system/coredns-66bc5c9577-qksbx"
	Nov 01 10:36:07 pause-405879 kubelet[1298]: I1101 10:36:07.549824    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qksbx" podStartSLOduration=12.549799692 podStartE2EDuration="12.549799692s" podCreationTimestamp="2025-11-01 10:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:36:07.531376115 +0000 UTC m=+18.185630329" watchObservedRunningTime="2025-11-01 10:36:07.549799692 +0000 UTC m=+18.204053904"
	Nov 01 10:36:16 pause-405879 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:36:16 pause-405879 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:36:16 pause-405879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:36:16 pause-405879 systemd[1]: kubelet.service: Consumed 1.154s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-405879 -n pause-405879
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-405879 -n pause-405879: exit status 2 (347.102848ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-405879 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-707467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-707467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (295.278726ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:41:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-707467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
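The stderr above points at the root cause: minikube's paused-state check shells out to `sudo runc list -f json`, and runc cannot open its state directory `/run/runc` inside the node. A minimal shell sketch for confirming this by hand, assuming the old-k8s-version-707467 profile is still running; the profile name and the runc command are taken from the log above, and the state-directory path comes from the error message (it may differ for cri-o):

	# Check whether the runc state directory exists inside the minikube node.
	minikube ssh -p old-k8s-version-707467 -- sudo ls -la /run/runc

	# Re-run the same check minikube performs before listing paused containers.
	minikube ssh -p old-k8s-version-707467 -- sudo runc list -f json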
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-707467 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-707467 describe deploy/metrics-server -n kube-system: exit status 1 (70.101029ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-707467 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
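The image expectation stated in the line above can be checked manually once the addon deployment exists; a minimal sketch, assuming the metrics-server addon eventually creates its deployment in kube-system (in this run it never did, hence the NotFound error earlier):

	# Print the image used by the metrics-server deployment and verify it carries
	# the overridden registry (fake.domain) and image (echoserver:1.4).
	kubectl --context old-k8s-version-707467 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'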
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-707467
helpers_test.go:243: (dbg) docker inspect old-k8s-version-707467:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f",
	        "Created": "2025-11-01T10:40:17.695472964Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 333093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:40:17.753703121Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f-json.log",
	        "Name": "/old-k8s-version-707467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-707467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-707467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f",
	                "LowerDir": "/var/lib/docker/overlay2/7160d1b5f0bf0a1a80f7e6224067bd12b5c005fbd450c5ac9cab1240620258c8-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7160d1b5f0bf0a1a80f7e6224067bd12b5c005fbd450c5ac9cab1240620258c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7160d1b5f0bf0a1a80f7e6224067bd12b5c005fbd450c5ac9cab1240620258c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7160d1b5f0bf0a1a80f7e6224067bd12b5c005fbd450c5ac9cab1240620258c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-707467",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-707467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-707467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-707467",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-707467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "51a31173649632e5fc344e72b897f1f4ba1f1f4f7a062716e30608137f7a51ee",
	            "SandboxKey": "/var/run/docker/netns/51a311736496",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-707467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:4c:27:06:d4:9a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "415a138baf910ff492e7f96276b65b02f48a203fb2684ca5f89bd5de7de466d7",
	                    "EndpointID": "3f4465d1209132c38b2591bfae04d376f3447529810bd74b3dc6e45179128a51",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-707467",
	                        "1c1720e1071c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-707467 -n old-k8s-version-707467
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-707467 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-707467 logs -n 25: (1.119259849s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-299863 sudo docker system info                                                                                                              │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │                     │
	│ ssh     │ -p kindnet-299863 sudo systemctl status cri-docker --all --full --no-pager                                                                             │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │                     │
	│ ssh     │ -p kindnet-299863 sudo systemctl cat cri-docker --no-pager                                                                                             │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ ssh     │ -p kindnet-299863 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                        │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │                     │
	│ ssh     │ -p kindnet-299863 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                  │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ ssh     │ -p kindnet-299863 sudo cri-dockerd --version                                                                                                           │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ ssh     │ -p kindnet-299863 sudo systemctl status containerd --all --full --no-pager                                                                             │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │                     │
	│ ssh     │ -p kindnet-299863 sudo systemctl cat containerd --no-pager                                                                                             │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ ssh     │ -p kindnet-299863 sudo cat /lib/systemd/system/containerd.service                                                                                      │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ ssh     │ -p kindnet-299863 sudo cat /etc/containerd/config.toml                                                                                                 │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ ssh     │ -p custom-flannel-299863 pgrep -a kubelet                                                                                                              │ custom-flannel-299863  │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ ssh     │ -p kindnet-299863 sudo systemctl status crio --all --full --no-pager                                                                                   │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ ssh     │ -p kindnet-299863 sudo systemctl cat crio --no-pager                                                                                                   │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ ssh     │ -p kindnet-299863 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                         │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ ssh     │ -p kindnet-299863 sudo crio config                                                                                                                     │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:40 UTC │
	│ delete  │ -p kindnet-299863                                                                                                                                      │ kindnet-299863         │ jenkins │ v1.37.0 │ 01 Nov 25 10:40 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ embed-certs-071527     │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/nsswitch.conf                                                                                                   │ custom-flannel-299863  │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/hosts                                                                                                           │ custom-flannel-299863  │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/resolv.conf                                                                                                     │ custom-flannel-299863  │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo crictl pods                                                                                                              │ custom-flannel-299863  │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo crictl ps --all                                                                                                          │ custom-flannel-299863  │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-707467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-707467 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                   │ custom-flannel-299863  │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo ip a s                                                                                                                   │ custom-flannel-299863  │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:41:02
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:41:02.669949  349346 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:41:02.670234  349346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:41:02.670246  349346 out.go:374] Setting ErrFile to fd 2...
	I1101 10:41:02.670253  349346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:41:02.670548  349346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:41:02.671233  349346 out.go:368] Setting JSON to false
	I1101 10:41:02.672966  349346 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8603,"bootTime":1761985060,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:41:02.673085  349346 start.go:143] virtualization: kvm guest
	I1101 10:41:02.675315  349346 out.go:179] * [embed-certs-071527] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:41:02.678662  349346 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:41:02.678719  349346 notify.go:221] Checking for updates...
	I1101 10:41:02.681106  349346 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:41:02.682460  349346 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:41:02.683694  349346 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:41:02.684796  349346 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:41:02.689031  349346 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:41:02.690925  349346 config.go:182] Loaded profile config "custom-flannel-299863": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:41:02.691091  349346 config.go:182] Loaded profile config "no-preload-753486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:41:02.691218  349346 config.go:182] Loaded profile config "old-k8s-version-707467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:41:02.691362  349346 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:41:02.720203  349346 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:41:02.720316  349346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:41:02.793762  349346 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-01 10:41:02.779006755 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:41:02.793876  349346 docker.go:319] overlay module found
	I1101 10:41:02.795808  349346 out.go:179] * Using the docker driver based on user configuration
	I1101 10:41:02.797000  349346 start.go:309] selected driver: docker
	I1101 10:41:02.797016  349346 start.go:930] validating driver "docker" against <nil>
	I1101 10:41:02.797028  349346 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:41:02.797629  349346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:41:02.862821  349346 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-01 10:41:02.852953882 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:41:02.862986  349346 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:41:02.863228  349346 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:41:02.864855  349346 out.go:179] * Using Docker driver with root privileges
	I1101 10:41:02.865958  349346 cni.go:84] Creating CNI manager for ""
	I1101 10:41:02.866052  349346 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:41:02.866068  349346 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:41:02.866153  349346 start.go:353] cluster config:
	{Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:41:02.867454  349346 out.go:179] * Starting "embed-certs-071527" primary control-plane node in "embed-certs-071527" cluster
	I1101 10:41:02.868525  349346 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:41:02.869571  349346 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:41:02.870554  349346 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:41:02.870611  349346 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:41:02.870632  349346 cache.go:59] Caching tarball of preloaded images
	I1101 10:41:02.870643  349346 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:41:02.870738  349346 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:41:02.870755  349346 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:41:02.870893  349346 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/config.json ...
	I1101 10:41:02.870928  349346 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/config.json: {Name:mkdd2b9a15b837c548d8c052b66c5679b1ef148b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:02.895660  349346 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:41:02.895684  349346 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:41:02.895705  349346 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:41:02.895736  349346 start.go:360] acquireMachinesLock for embed-certs-071527: {Name:mk6e96a90f486564e010d9ea6bfd4c480f872098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:41:02.895855  349346 start.go:364] duration metric: took 98.555µs to acquireMachinesLock for "embed-certs-071527"
	I1101 10:41:02.895881  349346 start.go:93] Provisioning new machine with config: &{Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:41:02.895964  349346 start.go:125] createHost starting for "" (driver="docker")
	W1101 10:40:58.115683  331938 node_ready.go:57] node "old-k8s-version-707467" has "Ready":"False" status (will retry)
	W1101 10:41:00.116165  331938 node_ready.go:57] node "old-k8s-version-707467" has "Ready":"False" status (will retry)
	I1101 10:41:00.615573  331938 node_ready.go:49] node "old-k8s-version-707467" is "Ready"
	I1101 10:41:00.615609  331938 node_ready.go:38] duration metric: took 13.50318641s for node "old-k8s-version-707467" to be "Ready" ...
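The Ready condition that the retry loop above polls can also be read straight from the node object; a minimal sketch, assuming kubectl is already pointed at this cluster:

	# print only the Ready condition status for the node named in the log (expected: True)
	kubectl get node old-k8s-version-707467 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'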
	I1101 10:41:00.615622  331938 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:41:00.615684  331938 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:41:00.627815  331938 api_server.go:72] duration metric: took 14.007523673s to wait for apiserver process to appear ...
	I1101 10:41:00.627841  331938 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:41:00.627859  331938 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 10:41:00.632319  331938 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 10:41:00.633553  331938 api_server.go:141] control plane version: v1.28.0
	I1101 10:41:00.633581  331938 api_server.go:131] duration metric: took 5.732649ms to wait for apiserver health ...
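The healthz probe logged here can be repeated by hand; a minimal sketch, assuming the endpoint from this run (/healthz is readable anonymously under default RBAC, so certificate verification is simply skipped):

	# query the apiserver health endpoint used above; the response body should be exactly "ok"
	curl -k https://192.168.94.2:8443/healthz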
	I1101 10:41:00.633592  331938 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:41:00.637728  331938 system_pods.go:59] 8 kube-system pods found
	I1101 10:41:00.637767  331938 system_pods.go:61] "coredns-5dd5756b68-9fdk6" [e43bd16e-e22d-4c91-88ec-652fe391b4f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:00.637774  331938 system_pods.go:61] "etcd-old-k8s-version-707467" [ef1fa7c6-d526-427b-bd21-26b22c575da3] Running
	I1101 10:41:00.637778  331938 system_pods.go:61] "kindnet-xxlgz" [cf757ff2-e0ef-43e8-97e9-44b145900bf5] Running
	I1101 10:41:00.637782  331938 system_pods.go:61] "kube-apiserver-old-k8s-version-707467" [fe63902d-d8ba-43e8-b891-f2dca076594c] Running
	I1101 10:41:00.637787  331938 system_pods.go:61] "kube-controller-manager-old-k8s-version-707467" [3c4a04cb-a001-47b9-b78c-271374ec1444] Running
	I1101 10:41:00.637791  331938 system_pods.go:61] "kube-proxy-2pbws" [f553a3e8-f065-4723-8a39-2fee4a395d45] Running
	I1101 10:41:00.637796  331938 system_pods.go:61] "kube-scheduler-old-k8s-version-707467" [b42a650a-bec0-46e1-b6ab-c1c33a4adea2] Running
	I1101 10:41:00.637807  331938 system_pods.go:61] "storage-provisioner" [476c3eb5-e771-4963-ac52-b3786e841080] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:00.637821  331938 system_pods.go:74] duration metric: took 4.220786ms to wait for pod list to return data ...
	I1101 10:41:00.637836  331938 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:41:00.640053  331938 default_sa.go:45] found service account: "default"
	I1101 10:41:00.640075  331938 default_sa.go:55] duration metric: took 2.231534ms for default service account to be created ...
	I1101 10:41:00.640087  331938 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:41:00.645153  331938 system_pods.go:86] 8 kube-system pods found
	I1101 10:41:00.645181  331938 system_pods.go:89] "coredns-5dd5756b68-9fdk6" [e43bd16e-e22d-4c91-88ec-652fe391b4f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:00.645187  331938 system_pods.go:89] "etcd-old-k8s-version-707467" [ef1fa7c6-d526-427b-bd21-26b22c575da3] Running
	I1101 10:41:00.645193  331938 system_pods.go:89] "kindnet-xxlgz" [cf757ff2-e0ef-43e8-97e9-44b145900bf5] Running
	I1101 10:41:00.645196  331938 system_pods.go:89] "kube-apiserver-old-k8s-version-707467" [fe63902d-d8ba-43e8-b891-f2dca076594c] Running
	I1101 10:41:00.645200  331938 system_pods.go:89] "kube-controller-manager-old-k8s-version-707467" [3c4a04cb-a001-47b9-b78c-271374ec1444] Running
	I1101 10:41:00.645204  331938 system_pods.go:89] "kube-proxy-2pbws" [f553a3e8-f065-4723-8a39-2fee4a395d45] Running
	I1101 10:41:00.645206  331938 system_pods.go:89] "kube-scheduler-old-k8s-version-707467" [b42a650a-bec0-46e1-b6ab-c1c33a4adea2] Running
	I1101 10:41:00.645211  331938 system_pods.go:89] "storage-provisioner" [476c3eb5-e771-4963-ac52-b3786e841080] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:00.645235  331938 retry.go:31] will retry after 227.491894ms: missing components: kube-dns
	I1101 10:41:00.882426  331938 system_pods.go:86] 8 kube-system pods found
	I1101 10:41:00.882472  331938 system_pods.go:89] "coredns-5dd5756b68-9fdk6" [e43bd16e-e22d-4c91-88ec-652fe391b4f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:00.882481  331938 system_pods.go:89] "etcd-old-k8s-version-707467" [ef1fa7c6-d526-427b-bd21-26b22c575da3] Running
	I1101 10:41:00.882488  331938 system_pods.go:89] "kindnet-xxlgz" [cf757ff2-e0ef-43e8-97e9-44b145900bf5] Running
	I1101 10:41:00.882504  331938 system_pods.go:89] "kube-apiserver-old-k8s-version-707467" [fe63902d-d8ba-43e8-b891-f2dca076594c] Running
	I1101 10:41:00.882513  331938 system_pods.go:89] "kube-controller-manager-old-k8s-version-707467" [3c4a04cb-a001-47b9-b78c-271374ec1444] Running
	I1101 10:41:00.883266  331938 system_pods.go:89] "kube-proxy-2pbws" [f553a3e8-f065-4723-8a39-2fee4a395d45] Running
	I1101 10:41:00.883287  331938 system_pods.go:89] "kube-scheduler-old-k8s-version-707467" [b42a650a-bec0-46e1-b6ab-c1c33a4adea2] Running
	I1101 10:41:00.883297  331938 system_pods.go:89] "storage-provisioner" [476c3eb5-e771-4963-ac52-b3786e841080] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:00.883319  331938 retry.go:31] will retry after 288.236401ms: missing components: kube-dns
	I1101 10:41:01.176960  331938 system_pods.go:86] 8 kube-system pods found
	I1101 10:41:01.176997  331938 system_pods.go:89] "coredns-5dd5756b68-9fdk6" [e43bd16e-e22d-4c91-88ec-652fe391b4f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:01.177005  331938 system_pods.go:89] "etcd-old-k8s-version-707467" [ef1fa7c6-d526-427b-bd21-26b22c575da3] Running
	I1101 10:41:01.177018  331938 system_pods.go:89] "kindnet-xxlgz" [cf757ff2-e0ef-43e8-97e9-44b145900bf5] Running
	I1101 10:41:01.177025  331938 system_pods.go:89] "kube-apiserver-old-k8s-version-707467" [fe63902d-d8ba-43e8-b891-f2dca076594c] Running
	I1101 10:41:01.177031  331938 system_pods.go:89] "kube-controller-manager-old-k8s-version-707467" [3c4a04cb-a001-47b9-b78c-271374ec1444] Running
	I1101 10:41:01.177036  331938 system_pods.go:89] "kube-proxy-2pbws" [f553a3e8-f065-4723-8a39-2fee4a395d45] Running
	I1101 10:41:01.177041  331938 system_pods.go:89] "kube-scheduler-old-k8s-version-707467" [b42a650a-bec0-46e1-b6ab-c1c33a4adea2] Running
	I1101 10:41:01.177051  331938 system_pods.go:89] "storage-provisioner" [476c3eb5-e771-4963-ac52-b3786e841080] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:01.177073  331938 retry.go:31] will retry after 341.000125ms: missing components: kube-dns
	I1101 10:41:01.523028  331938 system_pods.go:86] 8 kube-system pods found
	I1101 10:41:01.523062  331938 system_pods.go:89] "coredns-5dd5756b68-9fdk6" [e43bd16e-e22d-4c91-88ec-652fe391b4f1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:01.523069  331938 system_pods.go:89] "etcd-old-k8s-version-707467" [ef1fa7c6-d526-427b-bd21-26b22c575da3] Running
	I1101 10:41:01.523073  331938 system_pods.go:89] "kindnet-xxlgz" [cf757ff2-e0ef-43e8-97e9-44b145900bf5] Running
	I1101 10:41:01.523077  331938 system_pods.go:89] "kube-apiserver-old-k8s-version-707467" [fe63902d-d8ba-43e8-b891-f2dca076594c] Running
	I1101 10:41:01.523081  331938 system_pods.go:89] "kube-controller-manager-old-k8s-version-707467" [3c4a04cb-a001-47b9-b78c-271374ec1444] Running
	I1101 10:41:01.523084  331938 system_pods.go:89] "kube-proxy-2pbws" [f553a3e8-f065-4723-8a39-2fee4a395d45] Running
	I1101 10:41:01.523087  331938 system_pods.go:89] "kube-scheduler-old-k8s-version-707467" [b42a650a-bec0-46e1-b6ab-c1c33a4adea2] Running
	I1101 10:41:01.523101  331938 system_pods.go:89] "storage-provisioner" [476c3eb5-e771-4963-ac52-b3786e841080] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:01.523119  331938 retry.go:31] will retry after 586.647481ms: missing components: kube-dns
	I1101 10:41:02.114592  331938 system_pods.go:86] 8 kube-system pods found
	I1101 10:41:02.114617  331938 system_pods.go:89] "coredns-5dd5756b68-9fdk6" [e43bd16e-e22d-4c91-88ec-652fe391b4f1] Running
	I1101 10:41:02.114623  331938 system_pods.go:89] "etcd-old-k8s-version-707467" [ef1fa7c6-d526-427b-bd21-26b22c575da3] Running
	I1101 10:41:02.114627  331938 system_pods.go:89] "kindnet-xxlgz" [cf757ff2-e0ef-43e8-97e9-44b145900bf5] Running
	I1101 10:41:02.114630  331938 system_pods.go:89] "kube-apiserver-old-k8s-version-707467" [fe63902d-d8ba-43e8-b891-f2dca076594c] Running
	I1101 10:41:02.114635  331938 system_pods.go:89] "kube-controller-manager-old-k8s-version-707467" [3c4a04cb-a001-47b9-b78c-271374ec1444] Running
	I1101 10:41:02.114638  331938 system_pods.go:89] "kube-proxy-2pbws" [f553a3e8-f065-4723-8a39-2fee4a395d45] Running
	I1101 10:41:02.114645  331938 system_pods.go:89] "kube-scheduler-old-k8s-version-707467" [b42a650a-bec0-46e1-b6ab-c1c33a4adea2] Running
	I1101 10:41:02.114649  331938 system_pods.go:89] "storage-provisioner" [476c3eb5-e771-4963-ac52-b3786e841080] Running
	I1101 10:41:02.114657  331938 system_pods.go:126] duration metric: took 1.474563398s to wait for k8s-apps to be running ...
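Outside the harness, the same "k8s-apps running" condition can be approximated with kubectl; a rough equivalent, assuming kubectl targets this cluster and using the CoreDNS label the retries above were waiting on:

	# list kube-system pods, then block until CoreDNS reports Ready
	kubectl -n kube-system get pods
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s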
	I1101 10:41:02.114666  331938 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:41:02.114708  331938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:41:02.128232  331938 system_svc.go:56] duration metric: took 13.552793ms WaitForService to wait for kubelet
	I1101 10:41:02.128280  331938 kubeadm.go:587] duration metric: took 15.507992588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:41:02.128308  331938 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:41:02.131121  331938 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:41:02.131148  331938 node_conditions.go:123] node cpu capacity is 8
	I1101 10:41:02.131161  331938 node_conditions.go:105] duration metric: took 2.848587ms to run NodePressure ...
	I1101 10:41:02.131174  331938 start.go:242] waiting for startup goroutines ...
	I1101 10:41:02.131184  331938 start.go:247] waiting for cluster config update ...
	I1101 10:41:02.131208  331938 start.go:256] writing updated cluster config ...
	I1101 10:41:02.131477  331938 ssh_runner.go:195] Run: rm -f paused
	I1101 10:41:02.135593  331938 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:41:02.140532  331938 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-9fdk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:02.145362  331938 pod_ready.go:94] pod "coredns-5dd5756b68-9fdk6" is "Ready"
	I1101 10:41:02.145389  331938 pod_ready.go:86] duration metric: took 4.824381ms for pod "coredns-5dd5756b68-9fdk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:02.148099  331938 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:02.152248  331938 pod_ready.go:94] pod "etcd-old-k8s-version-707467" is "Ready"
	I1101 10:41:02.152271  331938 pod_ready.go:86] duration metric: took 4.15096ms for pod "etcd-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:02.154993  331938 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:02.158847  331938 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-707467" is "Ready"
	I1101 10:41:02.158879  331938 pod_ready.go:86] duration metric: took 3.864661ms for pod "kube-apiserver-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:02.161380  331938 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:02.541100  331938 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-707467" is "Ready"
	I1101 10:41:02.541136  331938 pod_ready.go:86] duration metric: took 379.729492ms for pod "kube-controller-manager-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:02.742296  331938 pod_ready.go:83] waiting for pod "kube-proxy-2pbws" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:03.140047  331938 pod_ready.go:94] pod "kube-proxy-2pbws" is "Ready"
	I1101 10:41:03.140075  331938 pod_ready.go:86] duration metric: took 397.744092ms for pod "kube-proxy-2pbws" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:03.342109  331938 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:03.740306  331938 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-707467" is "Ready"
	I1101 10:41:03.740333  331938 pod_ready.go:86] duration metric: took 398.198592ms for pod "kube-scheduler-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:03.740344  331938 pod_ready.go:40] duration metric: took 1.60471309s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:41:03.788661  331938 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1101 10:41:03.790427  331938 out.go:203] 
	W1101 10:41:03.791573  331938 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:41:03.792740  331938 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:41:03.794184  331938 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-707467" cluster and "default" namespace by default
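The version-skew warning above can be sidestepped by using the kubectl that minikube bundles for the cluster's own version; a sketch, assuming the profile name from this run:

	# run the matching kubectl (v1.28.0) against this profile instead of /usr/local/bin/kubectl
	minikube -p old-k8s-version-707467 kubectl -- get pods -A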
	I1101 10:41:06.405325  340100 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:41:06.405405  340100 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:41:06.405630  340100 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:41:06.405757  340100 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 10:41:06.405818  340100 kubeadm.go:319] OS: Linux
	I1101 10:41:06.405891  340100 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:41:06.405957  340100 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:41:06.406016  340100 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:41:06.406075  340100 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:41:06.406133  340100 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:41:06.406219  340100 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:41:06.406280  340100 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:41:06.406333  340100 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 10:41:06.406422  340100 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:41:06.406554  340100 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:41:06.406670  340100 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:41:06.406756  340100 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:41:06.410829  340100 out.go:252]   - Generating certificates and keys ...
	I1101 10:41:06.410968  340100 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:41:06.411072  340100 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:41:06.411167  340100 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:41:06.411231  340100 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:41:06.411308  340100 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:41:06.411372  340100 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:41:06.411446  340100 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:41:06.411613  340100 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-753486] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:41:06.411684  340100 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:41:06.411848  340100 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-753486] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1101 10:41:06.411935  340100 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:41:06.412027  340100 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:41:06.412089  340100 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:41:06.412166  340100 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:41:06.412236  340100 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:41:06.412318  340100 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:41:06.412389  340100 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:41:06.412484  340100 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:41:06.412604  340100 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:41:06.412712  340100 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:41:06.412801  340100 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:41:06.458063  340100 out.go:252]   - Booting up control plane ...
	I1101 10:41:06.458214  340100 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:41:06.458344  340100 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:41:06.458434  340100 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:41:06.458580  340100 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:41:06.458766  340100 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:41:06.458917  340100 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:41:06.459056  340100 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:41:06.459114  340100 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:41:06.459312  340100 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:41:06.459433  340100 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:41:06.459527  340100 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.662355ms
	I1101 10:41:06.459648  340100 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:41:06.459736  340100 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1101 10:41:06.459902  340100 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:41:06.460045  340100 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:41:06.460170  340100 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.291511441s
	I1101 10:41:06.460263  340100 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.831538544s
	I1101 10:41:06.460354  340100 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501097627s
	I1101 10:41:06.460516  340100 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:41:06.460690  340100 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:41:06.460787  340100 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:41:06.461059  340100 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-753486 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:41:06.461137  340100 kubeadm.go:319] [bootstrap-token] Using token: w70kw5.y1faoq78jbloeb2m
	I1101 10:41:06.521548  340100 out.go:252]   - Configuring RBAC rules ...
	I1101 10:41:06.521694  340100 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:41:06.521776  340100 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:41:06.521946  340100 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:41:06.522066  340100 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:41:06.522198  340100 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:41:06.522327  340100 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:41:06.522520  340100 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:41:06.522599  340100 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:41:06.522670  340100 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:41:06.522679  340100 kubeadm.go:319] 
	I1101 10:41:06.522763  340100 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:41:06.522773  340100 kubeadm.go:319] 
	I1101 10:41:06.522900  340100 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:41:06.522912  340100 kubeadm.go:319] 
	I1101 10:41:06.522947  340100 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:41:06.523036  340100 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:41:06.523126  340100 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:41:06.523141  340100 kubeadm.go:319] 
	I1101 10:41:06.523205  340100 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:41:06.523211  340100 kubeadm.go:319] 
	I1101 10:41:06.523258  340100 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:41:06.523264  340100 kubeadm.go:319] 
	I1101 10:41:06.523308  340100 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:41:06.523377  340100 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:41:06.523440  340100 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:41:06.523448  340100 kubeadm.go:319] 
	I1101 10:41:06.523539  340100 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:41:06.523608  340100 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:41:06.523615  340100 kubeadm.go:319] 
	I1101 10:41:06.523729  340100 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token w70kw5.y1faoq78jbloeb2m \
	I1101 10:41:06.523886  340100 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:940bb8e1f96ef3c88df818902bd8202f25d19108c9c93fa4896a1f509b4cfb64 \
	I1101 10:41:06.523930  340100 kubeadm.go:319] 	--control-plane 
	I1101 10:41:06.523940  340100 kubeadm.go:319] 
	I1101 10:41:06.524087  340100 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:41:06.524103  340100 kubeadm.go:319] 
	I1101 10:41:06.524210  340100 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token w70kw5.y1faoq78jbloeb2m \
	I1101 10:41:06.524390  340100 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:940bb8e1f96ef3c88df818902bd8202f25d19108c9c93fa4896a1f509b4cfb64 
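The --discovery-token-ca-cert-hash pinned in the join commands can be re-derived from the cluster CA; a minimal sketch, assuming the certificateDir /var/lib/minikube/certs shown earlier in this run:

	# recompute the sha256 of the CA public key that the join command pins
	openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'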
	I1101 10:41:06.524408  340100 cni.go:84] Creating CNI manager for ""
	I1101 10:41:06.524418  340100 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:41:06.544891  340100 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:41:02.897900  349346 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:41:02.898111  349346 start.go:159] libmachine.API.Create for "embed-certs-071527" (driver="docker")
	I1101 10:41:02.898139  349346 client.go:173] LocalClient.Create starting
	I1101 10:41:02.898197  349346 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem
	I1101 10:41:02.898235  349346 main.go:143] libmachine: Decoding PEM data...
	I1101 10:41:02.898264  349346 main.go:143] libmachine: Parsing certificate...
	I1101 10:41:02.898332  349346 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem
	I1101 10:41:02.898354  349346 main.go:143] libmachine: Decoding PEM data...
	I1101 10:41:02.898362  349346 main.go:143] libmachine: Parsing certificate...
	I1101 10:41:02.898687  349346 cli_runner.go:164] Run: docker network inspect embed-certs-071527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:41:02.917254  349346 cli_runner.go:211] docker network inspect embed-certs-071527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:41:02.917335  349346 network_create.go:284] running [docker network inspect embed-certs-071527] to gather additional debugging logs...
	I1101 10:41:02.917355  349346 cli_runner.go:164] Run: docker network inspect embed-certs-071527
	W1101 10:41:02.935371  349346 cli_runner.go:211] docker network inspect embed-certs-071527 returned with exit code 1
	I1101 10:41:02.935415  349346 network_create.go:287] error running [docker network inspect embed-certs-071527]: docker network inspect embed-certs-071527: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-071527 not found
	I1101 10:41:02.935430  349346 network_create.go:289] output of [docker network inspect embed-certs-071527]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-071527 not found
	
	** /stderr **
	I1101 10:41:02.935617  349346 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:41:02.954424  349346 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ac7093b735a5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:19:58:44:be:58} reservation:<nil>}
	I1101 10:41:02.955167  349346 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2c03ebffc507 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:41:85:56:13:7f} reservation:<nil>}
	I1101 10:41:02.956088  349346 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-abee7b1ad47f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:f3:16:31:10:75} reservation:<nil>}
	I1101 10:41:02.957054  349346 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c478900afbe4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0e:a3:82:53:bf:5c} reservation:<nil>}
	I1101 10:41:02.957615  349346 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0d84c48ff1a5 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:1e:c4:9d:c8:ea:2b} reservation:<nil>}
	I1101 10:41:02.958053  349346 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-415a138baf91 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:06:f1:0d:12:86:af} reservation:<nil>}
	I1101 10:41:02.958824  349346 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fb3d00}
	I1101 10:41:02.958849  349346 network_create.go:124] attempt to create docker network embed-certs-071527 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1101 10:41:02.958904  349346 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-071527 embed-certs-071527
	I1101 10:41:03.023040  349346 network_create.go:108] docker network embed-certs-071527 192.168.103.0/24 created
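The freshly created network can be confirmed with the same docker CLI the test drives; a small check, assuming the network name and subnet chosen above:

	# show the bridge network's name, subnet, and gateway
	docker network inspect embed-certs-071527 \
	  --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'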
	I1101 10:41:03.023070  349346 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-071527" container
	I1101 10:41:03.023140  349346 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:41:03.042131  349346 cli_runner.go:164] Run: docker volume create embed-certs-071527 --label name.minikube.sigs.k8s.io=embed-certs-071527 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:41:03.063083  349346 oci.go:103] Successfully created a docker volume embed-certs-071527
	I1101 10:41:03.063185  349346 cli_runner.go:164] Run: docker run --rm --name embed-certs-071527-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-071527 --entrypoint /usr/bin/test -v embed-certs-071527:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:41:03.520142  349346 oci.go:107] Successfully prepared a docker volume embed-certs-071527
	I1101 10:41:03.520186  349346 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:41:03.520209  349346 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:41:03.520291  349346 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-071527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 10:41:06.581629  340100 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:41:06.586434  340100 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:41:06.586454  340100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:41:06.603976  340100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:41:07.885056  340100 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.281042723s)
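Once the manifest is applied, the resulting CNI DaemonSet can be checked directly; a sketch, assuming minikube's bundled kindnet manifest labels its pods app=kindnet and reusing the kubeconfig and kubectl paths from this run:

	# confirm the kindnet pods created by the applied manifest
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet -o wide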
	I1101 10:41:07.885113  340100 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:41:07.885218  340100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:07.885234  340100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-753486 minikube.k8s.io/updated_at=2025_11_01T10_41_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=no-preload-753486 minikube.k8s.io/primary=true
	I1101 10:41:07.895666  340100 ops.go:34] apiserver oom_adj: -16
	I1101 10:41:08.129718  340100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:08.630768  340100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:09.130558  340100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:09.630214  340100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:10.130169  340100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:10.630726  340100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:11.130671  340100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:11.630290  340100 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:11.718920  340100 kubeadm.go:1114] duration metric: took 3.833785286s to wait for elevateKubeSystemPrivileges
	I1101 10:41:11.719038  340100 kubeadm.go:403] duration metric: took 16.71983389s to StartCluster
	I1101 10:41:11.719286  340100 settings.go:142] acquiring lock: {Name:mka443f0ac99a59b23190497686b8296dc73358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:11.719595  340100 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:41:11.721735  340100 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:11.722013  340100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:41:11.722042  340100 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:41:11.722124  340100 addons.go:70] Setting storage-provisioner=true in profile "no-preload-753486"
	I1101 10:41:11.722149  340100 addons.go:239] Setting addon storage-provisioner=true in "no-preload-753486"
	I1101 10:41:11.722453  340100 host.go:66] Checking if "no-preload-753486" exists ...
	I1101 10:41:11.722020  340100 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:41:11.722229  340100 addons.go:70] Setting default-storageclass=true in profile "no-preload-753486"
	I1101 10:41:11.722679  340100 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-753486"
	I1101 10:41:11.722237  340100 config.go:182] Loaded profile config "no-preload-753486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:41:11.723023  340100 cli_runner.go:164] Run: docker container inspect no-preload-753486 --format={{.State.Status}}
	I1101 10:41:11.724032  340100 out.go:179] * Verifying Kubernetes components...
	I1101 10:41:11.724121  340100 cli_runner.go:164] Run: docker container inspect no-preload-753486 --format={{.State.Status}}
	I1101 10:41:11.725719  340100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:41:11.761818  340100 addons.go:239] Setting addon default-storageclass=true in "no-preload-753486"
	I1101 10:41:11.761868  340100 host.go:66] Checking if "no-preload-753486" exists ...
	I1101 10:41:11.763682  340100 cli_runner.go:164] Run: docker container inspect no-preload-753486 --format={{.State.Status}}
	I1101 10:41:11.764470  340100 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:41:08.199187  349346 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-071527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.678834189s)
	I1101 10:41:08.199225  349346 kic.go:203] duration metric: took 4.679012011s to extract preloaded images to volume ...
	W1101 10:41:08.199352  349346 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 10:41:08.199399  349346 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 10:41:08.199449  349346 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:41:08.259706  349346 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-071527 --name embed-certs-071527 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-071527 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-071527 --network embed-certs-071527 --ip 192.168.103.2 --volume embed-certs-071527:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:41:08.546795  349346 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Running}}
	I1101 10:41:08.565630  349346 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:41:08.583925  349346 cli_runner.go:164] Run: docker exec embed-certs-071527 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:41:08.635393  349346 oci.go:144] the created container "embed-certs-071527" has a running status.
	I1101 10:41:08.635434  349346 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa...
	I1101 10:41:08.994670  349346 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:41:09.020307  349346 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:41:09.038136  349346 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:41:09.038171  349346 kic_runner.go:114] Args: [docker exec --privileged embed-certs-071527 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:41:09.086058  349346 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:41:09.105732  349346 machine.go:94] provisionDockerMachine start ...
	I1101 10:41:09.105899  349346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:41:09.123478  349346 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:09.123762  349346 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1101 10:41:09.123781  349346 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:41:09.269852  349346 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-071527
	
	I1101 10:41:09.269883  349346 ubuntu.go:182] provisioning hostname "embed-certs-071527"
	I1101 10:41:09.269946  349346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:41:09.288844  349346 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:09.289061  349346 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1101 10:41:09.289077  349346 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-071527 && echo "embed-certs-071527" | sudo tee /etc/hostname
	I1101 10:41:09.439309  349346 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-071527
	
	I1101 10:41:09.439393  349346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:41:09.457650  349346 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:09.457895  349346 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1101 10:41:09.457926  349346 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-071527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-071527/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-071527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:41:09.598404  349346 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:41:09.598434  349346 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:41:09.598484  349346 ubuntu.go:190] setting up certificates
	I1101 10:41:09.598516  349346 provision.go:84] configureAuth start
	I1101 10:41:09.598585  349346 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:41:09.615540  349346 provision.go:143] copyHostCerts
	I1101 10:41:09.615611  349346 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:41:09.615627  349346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:41:09.615710  349346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:41:09.615837  349346 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:41:09.615851  349346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:41:09.615894  349346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:41:09.615978  349346 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:41:09.615988  349346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:41:09.616026  349346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:41:09.616113  349346 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.embed-certs-071527 san=[127.0.0.1 192.168.103.2 embed-certs-071527 localhost minikube]
	I1101 10:41:10.031275  349346 provision.go:177] copyRemoteCerts
	I1101 10:41:10.031350  349346 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:41:10.031393  349346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:41:10.049217  349346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:41:10.151067  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:41:10.172457  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 10:41:10.191307  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:41:10.215782  349346 provision.go:87] duration metric: took 617.247101ms to configureAuth
	I1101 10:41:10.215818  349346 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:41:10.216047  349346 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:41:10.216201  349346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:41:10.237105  349346 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:10.237370  349346 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1101 10:41:10.237388  349346 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:41:10.494884  349346 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:41:10.494913  349346 machine.go:97] duration metric: took 1.389148439s to provisionDockerMachine
	I1101 10:41:10.494928  349346 client.go:176] duration metric: took 7.596779908s to LocalClient.Create
	I1101 10:41:10.494961  349346 start.go:167] duration metric: took 7.596850305s to libmachine.API.Create "embed-certs-071527"
	I1101 10:41:10.494971  349346 start.go:293] postStartSetup for "embed-certs-071527" (driver="docker")
	I1101 10:41:10.494983  349346 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:41:10.495050  349346 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:41:10.495100  349346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:41:10.513262  349346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:41:10.615161  349346 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:41:10.618826  349346 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:41:10.618853  349346 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:41:10.618864  349346 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 10:41:10.618914  349346 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 10:41:10.618981  349346 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem -> 615222.pem in /etc/ssl/certs
	I1101 10:41:10.619073  349346 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:41:10.627129  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:41:10.652156  349346 start.go:296] duration metric: took 157.169825ms for postStartSetup
	I1101 10:41:10.652603  349346 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:41:10.671897  349346 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/config.json ...
	I1101 10:41:10.672148  349346 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:41:10.672189  349346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:41:10.690179  349346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:41:10.788613  349346 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:41:10.793523  349346 start.go:128] duration metric: took 7.897541193s to createHost
	I1101 10:41:10.793555  349346 start.go:83] releasing machines lock for "embed-certs-071527", held for 7.897687688s
	I1101 10:41:10.793629  349346 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:41:10.810299  349346 ssh_runner.go:195] Run: cat /version.json
	I1101 10:41:10.810344  349346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:41:10.810354  349346 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:41:10.810428  349346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:41:10.829349  349346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:41:10.829637  349346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:41:10.927296  349346 ssh_runner.go:195] Run: systemctl --version
	I1101 10:41:10.987750  349346 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:41:11.023371  349346 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:41:11.028034  349346 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:41:11.028104  349346 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:41:11.057943  349346 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:41:11.057971  349346 start.go:496] detecting cgroup driver to use...
	I1101 10:41:11.058007  349346 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:41:11.058059  349346 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:41:11.076799  349346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:41:11.089567  349346 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:41:11.089636  349346 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:41:11.108439  349346 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:41:11.126409  349346 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:41:11.223393  349346 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:41:11.331215  349346 docker.go:234] disabling docker service ...
	I1101 10:41:11.331283  349346 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:41:11.355852  349346 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:41:11.370659  349346 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:41:11.468749  349346 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:41:11.567604  349346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:41:11.581238  349346 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:41:11.598788  349346 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:41:11.598850  349346 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:11.609924  349346 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:41:11.609989  349346 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:11.619256  349346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:11.629885  349346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:11.643202  349346 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:41:11.654328  349346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:11.664886  349346 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:11.683081  349346 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:11.696583  349346 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:41:11.707970  349346 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:41:11.719519  349346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:41:11.870732  349346 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:41:12.002985  349346 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:41:12.003088  349346 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:41:12.008862  349346 start.go:564] Will wait 60s for crictl version
	I1101 10:41:12.008922  349346 ssh_runner.go:195] Run: which crictl
	I1101 10:41:12.015002  349346 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:41:12.051767  349346 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:41:12.051848  349346 ssh_runner.go:195] Run: crio --version
	I1101 10:41:12.092880  349346 ssh_runner.go:195] Run: crio --version
	I1101 10:41:12.150992  349346 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:41:11.771034  340100 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:41:11.771058  340100 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:41:11.771198  340100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753486
	I1101 10:41:11.812998  340100 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:41:11.813038  340100 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:41:11.813105  340100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753486
	I1101 10:41:11.816085  340100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/no-preload-753486/id_rsa Username:docker}
	I1101 10:41:11.837268  340100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/no-preload-753486/id_rsa Username:docker}
	I1101 10:41:11.845999  340100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:41:11.926710  340100 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:41:11.946733  340100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:41:11.961831  340100 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:41:12.053850  340100 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1101 10:41:12.058310  340100 node_ready.go:35] waiting up to 6m0s for node "no-preload-753486" to be "Ready" ...
	I1101 10:41:12.336947  340100 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:41:12.152543  349346 cli_runner.go:164] Run: docker network inspect embed-certs-071527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:41:12.174313  349346 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 10:41:12.180407  349346 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:41:12.197157  349346 kubeadm.go:884] updating cluster {Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:41:12.197279  349346 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:41:12.197382  349346 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:41:12.239676  349346 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:41:12.239698  349346 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:41:12.239754  349346 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:41:12.274545  349346 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:41:12.274573  349346 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:41:12.274583  349346 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1101 10:41:12.274701  349346 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-071527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:41:12.274889  349346 ssh_runner.go:195] Run: crio config
	I1101 10:41:12.331804  349346 cni.go:84] Creating CNI manager for ""
	I1101 10:41:12.331834  349346 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:41:12.331854  349346 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:41:12.331883  349346 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-071527 NodeName:embed-certs-071527 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:41:12.332055  349346 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-071527"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:41:12.332128  349346 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:41:12.341546  349346 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:41:12.341602  349346 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:41:12.349395  349346 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1101 10:41:12.362639  349346 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:41:12.377984  349346 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1101 10:41:12.392115  349346 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:41:12.395908  349346 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:41:12.406362  349346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:41:12.502067  349346 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:41:12.526458  349346 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527 for IP: 192.168.103.2
	I1101 10:41:12.526476  349346 certs.go:195] generating shared ca certs ...
	I1101 10:41:12.526517  349346 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:12.526637  349346 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 10:41:12.526668  349346 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 10:41:12.526675  349346 certs.go:257] generating profile certs ...
	I1101 10:41:12.526725  349346 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/client.key
	I1101 10:41:12.526736  349346 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/client.crt with IP's: []
	I1101 10:41:12.568256  349346 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/client.crt ...
	I1101 10:41:12.568294  349346 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/client.crt: {Name:mk87cdb33d016c84e6907c37dc36a58be69dfebc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:12.568555  349346 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/client.key ...
	I1101 10:41:12.568578  349346 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/client.key: {Name:mk37a77fbd60d50a1794bdce175cfdd15aad94b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:12.568721  349346 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key.afddc8c1
	I1101 10:41:12.568744  349346 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.crt.afddc8c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1101 10:41:12.337973  340100 addons.go:515] duration metric: took 615.928607ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:41:12.561477  340100 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-753486" context rescaled to 1 replicas
	W1101 10:41:14.062023  340100 node_ready.go:57] node "no-preload-753486" has "Ready":"False" status (will retry)
	I1101 10:41:12.836226  349346 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.crt.afddc8c1 ...
	I1101 10:41:12.836261  349346 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.crt.afddc8c1: {Name:mka8fec621652a02b452146020cde0fe525114e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:12.836521  349346 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key.afddc8c1 ...
	I1101 10:41:12.836558  349346 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key.afddc8c1: {Name:mkc8915d95341870141e008bbce74af13bbe2ca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:12.836733  349346 certs.go:382] copying /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.crt.afddc8c1 -> /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.crt
	I1101 10:41:12.836838  349346 certs.go:386] copying /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key.afddc8c1 -> /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key
	I1101 10:41:12.836922  349346 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.key
	I1101 10:41:12.836945  349346 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.crt with IP's: []
	I1101 10:41:13.977416  349346 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.crt ...
	I1101 10:41:13.977449  349346 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.crt: {Name:mk59d7853a82457faca35d80a44f14edb7fec6da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:13.977662  349346 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.key ...
	I1101 10:41:13.977685  349346 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.key: {Name:mk162e50da8dc1bad7a8e9392dcde9293bd57f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:13.977979  349346 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem (1338 bytes)
	W1101 10:41:13.978033  349346 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522_empty.pem, impossibly tiny 0 bytes
	I1101 10:41:13.978047  349346 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:41:13.978088  349346 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:41:13.978123  349346 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:41:13.978159  349346 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 10:41:13.978218  349346 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:41:13.979014  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:41:14.004780  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:41:14.026792  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:41:14.049182  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:41:14.072264  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:41:14.097331  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:41:14.122697  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:41:14.150229  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:41:14.175722  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:41:14.204262  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem --> /usr/share/ca-certificates/61522.pem (1338 bytes)
	I1101 10:41:14.226291  349346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /usr/share/ca-certificates/615222.pem (1708 bytes)
	I1101 10:41:14.247642  349346 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:41:14.262548  349346 ssh_runner.go:195] Run: openssl version
	I1101 10:41:14.270815  349346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/615222.pem && ln -fs /usr/share/ca-certificates/615222.pem /etc/ssl/certs/615222.pem"
	I1101 10:41:14.280768  349346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/615222.pem
	I1101 10:41:14.285223  349346 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:01 /usr/share/ca-certificates/615222.pem
	I1101 10:41:14.285283  349346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/615222.pem
	I1101 10:41:14.332940  349346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/615222.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:41:14.342998  349346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:41:14.352372  349346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:41:14.357304  349346 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:41:14.357365  349346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:41:14.400714  349346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:41:14.414207  349346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/61522.pem && ln -fs /usr/share/ca-certificates/61522.pem /etc/ssl/certs/61522.pem"
	I1101 10:41:14.426069  349346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/61522.pem
	I1101 10:41:14.432112  349346 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:01 /usr/share/ca-certificates/61522.pem
	I1101 10:41:14.432201  349346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/61522.pem
	I1101 10:41:14.479910  349346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/61522.pem /etc/ssl/certs/51391683.0"
	I1101 10:41:14.491058  349346 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:41:14.496053  349346 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:41:14.496113  349346 kubeadm.go:401] StartCluster: {Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:41:14.496198  349346 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:41:14.496258  349346 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:41:14.527956  349346 cri.go:89] found id: ""
	I1101 10:41:14.528032  349346 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:41:14.536760  349346 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:41:14.546036  349346 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:41:14.546088  349346 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:41:14.555329  349346 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:41:14.555350  349346 kubeadm.go:158] found existing configuration files:
	
	I1101 10:41:14.555391  349346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:41:14.564533  349346 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:41:14.564614  349346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:41:14.573424  349346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:41:14.582626  349346 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:41:14.582690  349346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:41:14.591210  349346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:41:14.599826  349346 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:41:14.599879  349346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:41:14.608636  349346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:41:14.619202  349346 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:41:14.619257  349346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:41:14.627736  349346 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:41:14.672539  349346 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:41:14.672612  349346 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:41:14.695480  349346 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:41:14.695590  349346 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 10:41:14.695644  349346 kubeadm.go:319] OS: Linux
	I1101 10:41:14.695735  349346 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:41:14.695835  349346 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:41:14.695942  349346 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:41:14.696022  349346 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:41:14.696165  349346 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:41:14.696251  349346 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:41:14.696332  349346 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:41:14.696397  349346 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 10:41:14.785440  349346 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:41:14.785985  349346 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:41:14.786219  349346 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:41:14.800295  349346 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Nov 01 10:41:00 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:00.755384674Z" level=info msg="Starting container: 4e447aa8eda6984e24eb6e51189f72b52c4774663a675fd75d610b96ee9531a5" id=0b18ae74-14c3-44bb-99b5-f7a6e83b96fa name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:41:00 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:00.75760258Z" level=info msg="Started container" PID=2127 containerID=4e447aa8eda6984e24eb6e51189f72b52c4774663a675fd75d610b96ee9531a5 description=kube-system/coredns-5dd5756b68-9fdk6/coredns id=0b18ae74-14c3-44bb-99b5-f7a6e83b96fa name=/runtime.v1.RuntimeService/StartContainer sandboxID=d103db8550b6a7ac611d133257aaa8cf984ae70f47d2e97c6525d99c2f336169
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.266080044Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d8509e09-5e6a-4207-80fa-4457e1834ef6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.26622458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.272633591Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a79752648b4048f0ed2c2839605e59d01aa4a5843c3a5431a05b5347ed0f89a1 UID:19c1aad2-c5a5-4e04-b902-4eb808a4b2de NetNS:/var/run/netns/7b055df4-cefe-436d-818a-ec662d8b524f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009112a8}] Aliases:map[]}"
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.272668895Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.284270098Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a79752648b4048f0ed2c2839605e59d01aa4a5843c3a5431a05b5347ed0f89a1 UID:19c1aad2-c5a5-4e04-b902-4eb808a4b2de NetNS:/var/run/netns/7b055df4-cefe-436d-818a-ec662d8b524f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009112a8}] Aliases:map[]}"
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.284475907Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.285587946Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.286698642Z" level=info msg="Ran pod sandbox a79752648b4048f0ed2c2839605e59d01aa4a5843c3a5431a05b5347ed0f89a1 with infra container: default/busybox/POD" id=d8509e09-5e6a-4207-80fa-4457e1834ef6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.288017705Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=78d61e64-1484-40fb-ba34-f2612a589240 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.28817267Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=78d61e64-1484-40fb-ba34-f2612a589240 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.28822978Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=78d61e64-1484-40fb-ba34-f2612a589240 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.288886703Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99117bf5-93af-4c86-bdeb-64a76206a3da name=/runtime.v1.ImageService/PullImage
	Nov 01 10:41:04 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:04.290934118Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:41:08 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:08.175608729Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=99117bf5-93af-4c86-bdeb-64a76206a3da name=/runtime.v1.ImageService/PullImage
	Nov 01 10:41:08 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:08.176995177Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0b1ceaed-258e-4db7-8b00-cb113dafe394 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:08 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:08.178914974Z" level=info msg="Creating container: default/busybox/busybox" id=5ab321d7-5684-4e49-aba8-3124944e4f22 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:41:08 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:08.179064315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:41:08 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:08.188776924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:41:08 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:08.189925328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:41:08 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:08.218209584Z" level=info msg="Created container 4499de89a05a89ebbbcc936eaff4ab0ac10f7bd326204db5205f269b5efb7af9: default/busybox/busybox" id=5ab321d7-5684-4e49-aba8-3124944e4f22 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:41:08 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:08.219167741Z" level=info msg="Starting container: 4499de89a05a89ebbbcc936eaff4ab0ac10f7bd326204db5205f269b5efb7af9" id=c1729fd6-a186-409d-80a5-d16d4bcf9c13 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:41:08 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:08.221805516Z" level=info msg="Started container" PID=2203 containerID=4499de89a05a89ebbbcc936eaff4ab0ac10f7bd326204db5205f269b5efb7af9 description=default/busybox/busybox id=c1729fd6-a186-409d-80a5-d16d4bcf9c13 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a79752648b4048f0ed2c2839605e59d01aa4a5843c3a5431a05b5347ed0f89a1
	Nov 01 10:41:14 old-k8s-version-707467 crio[768]: time="2025-11-01T10:41:14.080779195Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	4499de89a05a8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   a79752648b404       busybox                                          default
	4e447aa8eda69       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 seconds ago      Running             coredns                   0                   d103db8550b6a       coredns-5dd5756b68-9fdk6                         kube-system
	18f4ea1377348       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   4707ed0ca2ebd       storage-provisioner                              kube-system
	f3588cc366653       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   e48b6be1f0387       kindnet-xxlgz                                    kube-system
	d4738cc7c5677       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      28 seconds ago      Running             kube-proxy                0                   2949eedfd7bae       kube-proxy-2pbws                                 kube-system
	b827fc54a57c0       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      47 seconds ago      Running             kube-apiserver            0                   9884df4fa7200       kube-apiserver-old-k8s-version-707467            kube-system
	15101e7e15e5d       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      47 seconds ago      Running             kube-controller-manager   0                   1d59728361045       kube-controller-manager-old-k8s-version-707467   kube-system
	f7799fe05b6f4       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      47 seconds ago      Running             kube-scheduler            0                   c9d0fe93542e5       kube-scheduler-old-k8s-version-707467            kube-system
	44962c9a26cc1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      47 seconds ago      Running             etcd                      0                   8f6f1ddeb472f       etcd-old-k8s-version-707467                      kube-system
	
	
	==> coredns [4e447aa8eda6984e24eb6e51189f72b52c4774663a675fd75d610b96ee9531a5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53441 - 56678 "HINFO IN 6867039138722661911.985343008031825627. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.04014529s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-707467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-707467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=old-k8s-version-707467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_40_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:40:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-707467
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:41:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:41:04 +0000   Sat, 01 Nov 2025 10:40:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:41:04 +0000   Sat, 01 Nov 2025 10:40:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:41:04 +0000   Sat, 01 Nov 2025 10:40:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:41:04 +0000   Sat, 01 Nov 2025 10:41:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-707467
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                c2fc43a3-538e-4e6c-a223-e8844e524c0a
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-9fdk6                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-707467                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-xxlgz                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-707467             250m (3%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-old-k8s-version-707467    200m (2%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-2pbws                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-707467             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 42s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s   kubelet          Node old-k8s-version-707467 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s   kubelet          Node old-k8s-version-707467 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s   kubelet          Node old-k8s-version-707467 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-707467 event: Registered Node old-k8s-version-707467 in Controller
	  Normal  NodeReady                15s   kubelet          Node old-k8s-version-707467 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[Nov 1 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	
	
	==> etcd [44962c9a26cc180fc74937854fb4ec55f17762e54e5f2275f52ff9c6d973b0d5] <==
	{"level":"info","ts":"2025-11-01T10:40:28.896808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:40:28.896838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-01T10:40:28.896853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-01T10:40:28.896869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-01T10:40:28.897606Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-707467 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:40:28.897654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:40:28.897742Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:40:28.897769Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:40:28.900879Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:40:28.901133Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:40:28.901162Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T10:40:28.901181Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:40:28.901234Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:40:28.901224Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T10:40:28.902281Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-11-01T10:40:34.785198Z","caller":"traceutil/trace.go:171","msg":"trace[150068586] transaction","detail":"{read_only:false; response_revision:252; number_of_response:1; }","duration":"105.614871ms","start":"2025-11-01T10:40:34.679365Z","end":"2025-11-01T10:40:34.78498Z","steps":["trace[150068586] 'process raft request'  (duration: 27.798698ms)","trace[150068586] 'compare'  (duration: 77.619766ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:40:34.93388Z","caller":"traceutil/trace.go:171","msg":"trace[860556897] transaction","detail":"{read_only:false; response_revision:253; number_of_response:1; }","duration":"132.783252ms","start":"2025-11-01T10:40:34.801076Z","end":"2025-11-01T10:40:34.93386Z","steps":["trace[860556897] 'process raft request'  (duration: 132.299176ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:40:50.832916Z","caller":"traceutil/trace.go:171","msg":"trace[1048875779] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"176.994312ms","start":"2025-11-01T10:40:50.655903Z","end":"2025-11-01T10:40:50.832897Z","steps":["trace[1048875779] 'process raft request'  (duration: 169.148341ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:41:01.837448Z","caller":"traceutil/trace.go:171","msg":"trace[1879000243] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"148.641762ms","start":"2025-11-01T10:41:01.688787Z","end":"2025-11-01T10:41:01.837429Z","steps":["trace[1879000243] 'process raft request'  (duration: 124.078942ms)","trace[1879000243] 'compare'  (duration: 24.454648ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:41:01.876553Z","caller":"traceutil/trace.go:171","msg":"trace[1845917906] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"187.361946ms","start":"2025-11-01T10:41:01.689174Z","end":"2025-11-01T10:41:01.876536Z","steps":["trace[1845917906] 'process raft request'  (duration: 187.181038ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:41:01.876728Z","caller":"traceutil/trace.go:171","msg":"trace[631745848] linearizableReadLoop","detail":"{readStateIndex:418; appliedIndex:415; }","duration":"172.331503ms","start":"2025-11-01T10:41:01.704377Z","end":"2025-11-01T10:41:01.876709Z","steps":["trace[631745848] 'read index received'  (duration: 108.608044ms)","trace[631745848] 'applied index is now lower than readState.Index'  (duration: 63.721322ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:41:01.876808Z","caller":"traceutil/trace.go:171","msg":"trace[1990263407] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"182.692193ms","start":"2025-11-01T10:41:01.694101Z","end":"2025-11-01T10:41:01.876793Z","steps":["trace[1990263407] 'process raft request'  (duration: 182.368105ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:41:01.876867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.467977ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.94.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-11-01T10:41:01.876942Z","caller":"traceutil/trace.go:171","msg":"trace[1410265362] range","detail":"{range_begin:/registry/masterleases/192.168.94.2; range_end:; response_count:1; response_revision:402; }","duration":"172.573216ms","start":"2025-11-01T10:41:01.70434Z","end":"2025-11-01T10:41:01.876913Z","steps":["trace[1410265362] 'agreement among raft nodes before linearized reading'  (duration: 172.434205ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:41:06.926361Z","caller":"traceutil/trace.go:171","msg":"trace[1714291810] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"105.585027ms","start":"2025-11-01T10:41:06.820757Z","end":"2025-11-01T10:41:06.926342Z","steps":["trace[1714291810] 'process raft request'  (duration: 105.43744ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:41:15 up  2:23,  0 user,  load average: 5.00, 3.71, 2.34
	Linux old-k8s-version-707467 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3588cc3666536953b08bbaf111da18336d919634f3794243aabb9fd41258161] <==
	I1101 10:40:49.816155       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:40:49.910986       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 10:40:49.911159       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:40:49.911178       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:40:49.911189       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:40:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:40:50.115197       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:40:50.116165       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:40:50.210982       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:40:50.211204       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:40:50.611537       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:40:50.611632       1 metrics.go:72] Registering metrics
	I1101 10:40:50.611704       1 controller.go:711] "Syncing nftables rules"
	I1101 10:41:00.123945       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:41:00.124015       1 main.go:301] handling current node
	I1101 10:41:10.116152       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:41:10.116182       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b827fc54a57c05861d276a263830ad87798d20a37e43c5a5ed4e5c8ba00655d6] <==
	I1101 10:40:30.094488       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 10:40:30.094613       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 10:40:30.094639       1 aggregator.go:166] initial CRD sync complete...
	I1101 10:40:30.094652       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 10:40:30.094659       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:40:30.094666       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:40:30.095633       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 10:40:30.096676       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 10:40:30.103821       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:40:30.145003       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 10:40:31.000174       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:40:31.004517       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:40:31.004541       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:40:31.516025       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:40:31.569196       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:40:31.708786       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:40:31.714380       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1101 10:40:31.715343       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 10:40:31.718930       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:40:32.031432       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 10:40:33.415374       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 10:40:33.427565       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:40:33.444916       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 10:40:46.276716       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 10:40:46.418537       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [15101e7e15e5d48876dab3958199169b685a3392b4f9529b2e9298c520c3ef41] <==
	I1101 10:40:45.965612       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1101 10:40:45.973472       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 10:40:46.014107       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1101 10:40:46.299821       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1101 10:40:46.339263       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:40:46.409259       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:40:46.409315       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 10:40:46.437289       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2pbws"
	I1101 10:40:46.437319       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xxlgz"
	I1101 10:40:46.826149       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-dbs2d"
	I1101 10:40:46.836719       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-9fdk6"
	I1101 10:40:46.860061       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="560.355797ms"
	I1101 10:40:46.878162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.045641ms"
	I1101 10:40:46.879203       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="496.22µs"
	I1101 10:40:47.158281       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1101 10:40:47.180949       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-dbs2d"
	I1101 10:40:47.194073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.371452ms"
	I1101 10:40:47.201228       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.100092ms"
	I1101 10:40:47.201678       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.303µs"
	I1101 10:41:00.402754       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.539µs"
	I1101 10:41:00.412590       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.853µs"
	I1101 10:41:00.815397       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1101 10:41:01.685588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="158.768µs"
	I1101 10:41:01.932232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.713881ms"
	I1101 10:41:01.932349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.751µs"
	
	
	==> kube-proxy [d4738cc7c56770774211fcebe89ee91a16b0fc536b6b3992712ca4817cc560f5] <==
	I1101 10:40:46.933848       1 server_others.go:69] "Using iptables proxy"
	I1101 10:40:46.948593       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1101 10:40:46.984455       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:40:46.988761       1 server_others.go:152] "Using iptables Proxier"
	I1101 10:40:46.989003       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 10:40:46.989044       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 10:40:46.989107       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 10:40:46.989380       1 server.go:846] "Version info" version="v1.28.0"
	I1101 10:40:46.989826       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:40:46.991154       1 config.go:188] "Starting service config controller"
	I1101 10:40:46.992365       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 10:40:46.991670       1 config.go:97] "Starting endpoint slice config controller"
	I1101 10:40:46.992441       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 10:40:46.992338       1 config.go:315] "Starting node config controller"
	I1101 10:40:46.992452       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 10:40:47.093017       1 shared_informer.go:318] Caches are synced for service config
	I1101 10:40:47.093091       1 shared_informer.go:318] Caches are synced for node config
	I1101 10:40:47.093105       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f7799fe05b6f4b3dae86712c6abe8c515724abaf4b978fe3bb4d64cddabcaf4f] <==
	E1101 10:40:30.074514       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 10:40:30.074437       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 10:40:30.074590       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 10:40:30.074364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 10:40:30.074610       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 10:40:30.074629       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 10:40:30.916234       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 10:40:30.916268       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 10:40:30.951407       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 10:40:30.951540       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:40:30.962199       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 10:40:30.962239       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1101 10:40:30.995109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 10:40:30.995146       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 10:40:31.003615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 10:40:31.003659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1101 10:40:31.039615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 10:40:31.039747       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 10:40:31.039664       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 10:40:31.039860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 10:40:31.125924       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 10:40:31.125963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1101 10:40:31.245701       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 10:40:31.245758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1101 10:40:34.069105       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:40:45 old-k8s-version-707467 kubelet[1376]: I1101 10:40:45.813072    1376 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:40:45 old-k8s-version-707467 kubelet[1376]: I1101 10:40:45.813925    1376 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:40:46 old-k8s-version-707467 kubelet[1376]: I1101 10:40:46.443690    1376 topology_manager.go:215] "Topology Admit Handler" podUID="f553a3e8-f065-4723-8a39-2fee4a395d45" podNamespace="kube-system" podName="kube-proxy-2pbws"
	Nov 01 10:40:46 old-k8s-version-707467 kubelet[1376]: I1101 10:40:46.452925    1376 topology_manager.go:215] "Topology Admit Handler" podUID="cf757ff2-e0ef-43e8-97e9-44b145900bf5" podNamespace="kube-system" podName="kindnet-xxlgz"
	Nov 01 10:40:46 old-k8s-version-707467 kubelet[1376]: I1101 10:40:46.477345    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cf757ff2-e0ef-43e8-97e9-44b145900bf5-cni-cfg\") pod \"kindnet-xxlgz\" (UID: \"cf757ff2-e0ef-43e8-97e9-44b145900bf5\") " pod="kube-system/kindnet-xxlgz"
	Nov 01 10:40:46 old-k8s-version-707467 kubelet[1376]: I1101 10:40:46.477404    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f553a3e8-f065-4723-8a39-2fee4a395d45-lib-modules\") pod \"kube-proxy-2pbws\" (UID: \"f553a3e8-f065-4723-8a39-2fee4a395d45\") " pod="kube-system/kube-proxy-2pbws"
	Nov 01 10:40:46 old-k8s-version-707467 kubelet[1376]: I1101 10:40:46.477438    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drprd\" (UniqueName: \"kubernetes.io/projected/f553a3e8-f065-4723-8a39-2fee4a395d45-kube-api-access-drprd\") pod \"kube-proxy-2pbws\" (UID: \"f553a3e8-f065-4723-8a39-2fee4a395d45\") " pod="kube-system/kube-proxy-2pbws"
	Nov 01 10:40:46 old-k8s-version-707467 kubelet[1376]: I1101 10:40:46.477570    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f553a3e8-f065-4723-8a39-2fee4a395d45-kube-proxy\") pod \"kube-proxy-2pbws\" (UID: \"f553a3e8-f065-4723-8a39-2fee4a395d45\") " pod="kube-system/kube-proxy-2pbws"
	Nov 01 10:40:46 old-k8s-version-707467 kubelet[1376]: I1101 10:40:46.477614    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f553a3e8-f065-4723-8a39-2fee4a395d45-xtables-lock\") pod \"kube-proxy-2pbws\" (UID: \"f553a3e8-f065-4723-8a39-2fee4a395d45\") " pod="kube-system/kube-proxy-2pbws"
	Nov 01 10:40:46 old-k8s-version-707467 kubelet[1376]: I1101 10:40:46.477649    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf757ff2-e0ef-43e8-97e9-44b145900bf5-xtables-lock\") pod \"kindnet-xxlgz\" (UID: \"cf757ff2-e0ef-43e8-97e9-44b145900bf5\") " pod="kube-system/kindnet-xxlgz"
	Nov 01 10:40:46 old-k8s-version-707467 kubelet[1376]: I1101 10:40:46.477675    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf757ff2-e0ef-43e8-97e9-44b145900bf5-lib-modules\") pod \"kindnet-xxlgz\" (UID: \"cf757ff2-e0ef-43e8-97e9-44b145900bf5\") " pod="kube-system/kindnet-xxlgz"
	Nov 01 10:40:46 old-k8s-version-707467 kubelet[1376]: I1101 10:40:46.477709    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vdql\" (UniqueName: \"kubernetes.io/projected/cf757ff2-e0ef-43e8-97e9-44b145900bf5-kube-api-access-6vdql\") pod \"kindnet-xxlgz\" (UID: \"cf757ff2-e0ef-43e8-97e9-44b145900bf5\") " pod="kube-system/kindnet-xxlgz"
	Nov 01 10:40:47 old-k8s-version-707467 kubelet[1376]: I1101 10:40:47.580392    1376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2pbws" podStartSLOduration=1.580337874 podCreationTimestamp="2025-11-01 10:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:40:47.580098716 +0000 UTC m=+14.199442489" watchObservedRunningTime="2025-11-01 10:40:47.580337874 +0000 UTC m=+14.199681646"
	Nov 01 10:41:00 old-k8s-version-707467 kubelet[1376]: I1101 10:41:00.382047    1376 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 01 10:41:00 old-k8s-version-707467 kubelet[1376]: I1101 10:41:00.402906    1376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-xxlgz" podStartSLOduration=11.585196705 podCreationTimestamp="2025-11-01 10:40:46 +0000 UTC" firstStartedPulling="2025-11-01 10:40:46.774912617 +0000 UTC m=+13.394256383" lastFinishedPulling="2025-11-01 10:40:49.592557926 +0000 UTC m=+16.211901696" observedRunningTime="2025-11-01 10:40:50.651199869 +0000 UTC m=+17.270543642" watchObservedRunningTime="2025-11-01 10:41:00.402842018 +0000 UTC m=+27.022185805"
	Nov 01 10:41:00 old-k8s-version-707467 kubelet[1376]: I1101 10:41:00.403371    1376 topology_manager.go:215] "Topology Admit Handler" podUID="e43bd16e-e22d-4c91-88ec-652fe391b4f1" podNamespace="kube-system" podName="coredns-5dd5756b68-9fdk6"
	Nov 01 10:41:00 old-k8s-version-707467 kubelet[1376]: I1101 10:41:00.403868    1376 topology_manager.go:215] "Topology Admit Handler" podUID="476c3eb5-e771-4963-ac52-b3786e841080" podNamespace="kube-system" podName="storage-provisioner"
	Nov 01 10:41:00 old-k8s-version-707467 kubelet[1376]: I1101 10:41:00.478173    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l78s\" (UniqueName: \"kubernetes.io/projected/476c3eb5-e771-4963-ac52-b3786e841080-kube-api-access-8l78s\") pod \"storage-provisioner\" (UID: \"476c3eb5-e771-4963-ac52-b3786e841080\") " pod="kube-system/storage-provisioner"
	Nov 01 10:41:00 old-k8s-version-707467 kubelet[1376]: I1101 10:41:00.478239    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e43bd16e-e22d-4c91-88ec-652fe391b4f1-config-volume\") pod \"coredns-5dd5756b68-9fdk6\" (UID: \"e43bd16e-e22d-4c91-88ec-652fe391b4f1\") " pod="kube-system/coredns-5dd5756b68-9fdk6"
	Nov 01 10:41:00 old-k8s-version-707467 kubelet[1376]: I1101 10:41:00.478322    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/476c3eb5-e771-4963-ac52-b3786e841080-tmp\") pod \"storage-provisioner\" (UID: \"476c3eb5-e771-4963-ac52-b3786e841080\") " pod="kube-system/storage-provisioner"
	Nov 01 10:41:00 old-k8s-version-707467 kubelet[1376]: I1101 10:41:00.478401    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq6rc\" (UniqueName: \"kubernetes.io/projected/e43bd16e-e22d-4c91-88ec-652fe391b4f1-kube-api-access-hq6rc\") pod \"coredns-5dd5756b68-9fdk6\" (UID: \"e43bd16e-e22d-4c91-88ec-652fe391b4f1\") " pod="kube-system/coredns-5dd5756b68-9fdk6"
	Nov 01 10:41:01 old-k8s-version-707467 kubelet[1376]: I1101 10:41:01.878740    1376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-9fdk6" podStartSLOduration=15.878680465 podCreationTimestamp="2025-11-01 10:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:01.686164462 +0000 UTC m=+28.305508235" watchObservedRunningTime="2025-11-01 10:41:01.878680465 +0000 UTC m=+28.498024240"
	Nov 01 10:41:01 old-k8s-version-707467 kubelet[1376]: I1101 10:41:01.933696    1376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.933636373 podCreationTimestamp="2025-11-01 10:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:01.933348221 +0000 UTC m=+28.552691995" watchObservedRunningTime="2025-11-01 10:41:01.933636373 +0000 UTC m=+28.552980178"
	Nov 01 10:41:03 old-k8s-version-707467 kubelet[1376]: I1101 10:41:03.963447    1376 topology_manager.go:215] "Topology Admit Handler" podUID="19c1aad2-c5a5-4e04-b902-4eb808a4b2de" podNamespace="default" podName="busybox"
	Nov 01 10:41:04 old-k8s-version-707467 kubelet[1376]: I1101 10:41:04.005396    1376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtvkl\" (UniqueName: \"kubernetes.io/projected/19c1aad2-c5a5-4e04-b902-4eb808a4b2de-kube-api-access-dtvkl\") pod \"busybox\" (UID: \"19c1aad2-c5a5-4e04-b902-4eb808a4b2de\") " pod="default/busybox"
	
	
	==> storage-provisioner [18f4ea13773481db96f0340e191661a975b98f2e883a80811a46050d7bef49b1] <==
	I1101 10:41:00.764926       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:41:00.775336       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:41:00.775404       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 10:41:00.788138       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:41:00.788846       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-707467_9564a2f1-a09e-40cb-9fbc-01aa99341eb8!
	I1101 10:41:00.788838       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"97b1eecb-ad8d-49bf-af88-e6407fe47b1a", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-707467_9564a2f1-a09e-40cb-9fbc-01aa99341eb8 became leader
	I1101 10:41:00.889980       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-707467_9564a2f1-a09e-40cb-9fbc-01aa99341eb8!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-707467 -n old-k8s-version-707467
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-707467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.36s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.599641ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:41:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-753486 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-753486 describe deploy/metrics-server -n kube-system: exit status 1 (62.956193ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-753486 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
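The stderr above shows the enable command failing inside minikube's paused-state probe, which runs "sudo runc list -f json" on the node and aborts because /run/runc does not exist on this crio-backed node. A minimal way to re-run that probe by hand against the same profile (a sketch only, using the profile name from this run; minikube's internal check may differ in detail):

	out/minikube-linux-amd64 -p no-preload-753486 ssh -- sudo ls /run/runc
	out/minikube-linux-amd64 -p no-preload-753486 ssh -- sudo runc list -f json

Both commands should reproduce the same "no such file or directory" failure for as long as the runc state directory is absent on the node.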
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-753486
helpers_test.go:243: (dbg) docker inspect no-preload-753486:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83",
	        "Created": "2025-11-01T10:40:35.467852575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 340796,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:40:35.497830663Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83/hostname",
	        "HostsPath": "/var/lib/docker/containers/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83/hosts",
	        "LogPath": "/var/lib/docker/containers/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83-json.log",
	        "Name": "/no-preload-753486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-753486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-753486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83",
	                "LowerDir": "/var/lib/docker/overlay2/cc0dcd6cf1b9bf2cb25a93b0871481cd4ef5d19c0441af5087e2777000b75593-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc0dcd6cf1b9bf2cb25a93b0871481cd4ef5d19c0441af5087e2777000b75593/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc0dcd6cf1b9bf2cb25a93b0871481cd4ef5d19c0441af5087e2777000b75593/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc0dcd6cf1b9bf2cb25a93b0871481cd4ef5d19c0441af5087e2777000b75593/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-753486",
	                "Source": "/var/lib/docker/volumes/no-preload-753486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-753486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-753486",
	                "name.minikube.sigs.k8s.io": "no-preload-753486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fa26e269ee59bbcc89e921f98e3d43a9b6c87f111cd12b1ed7bafae1c7fca783",
	            "SandboxKey": "/var/run/docker/netns/fa26e269ee59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-753486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:ec:9a:49:4c:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d84c48ff1a5729254be6ec17799a5aeb1a98c07f8517c94be1c2de332505338",
	                    "EndpointID": "a7811abc51b9600599f0a6160f67aaf60904d8da49b26b4288037b58fc94a3f3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-753486",
	                        "6be5ddfae7c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
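For reference, the "Ports" block in the inspect output above is how minikube reaches this kic container from the host: SSH (22/tcp) and the Kubernetes API server (8443/tcp) are published on 127.0.0.1. Below is a minimal manual sketch of the same lookup (assuming the docker CLI is available and the no-preload-753486 container from this run still exists); the Go-template format string mirrors the one cli_runner executes later in these logs.
	# Hypothetical manual check (not part of the test run): read back the host port mapped to 22/tcp.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-753486
	# Per the inspect output above this prints 33093; 8443/tcp maps to 33096 for the API server.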
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753486 -n no-preload-753486
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-753486 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-753486 logs -n 25: (1.043009396s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-299863 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                 │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat docker --no-pager                                                                                                                                                                                 │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/docker/daemon.json                                                                                                                                                                                     │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo docker system info                                                                                                                                                                                              │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                             │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                             │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                        │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                  │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cri-dockerd --version                                                                                                                                                                                           │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                             │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo crio config                                                                                                                                                                                                     │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p custom-flannel-299863                                                                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p disable-driver-mounts-339061                                                                                                                                                                                                               │ disable-driver-mounts-339061 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-707467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:41:33
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:41:33.635693  359640 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:41:33.635988  359640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:41:33.635999  359640 out.go:374] Setting ErrFile to fd 2...
	I1101 10:41:33.636003  359640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:41:33.636307  359640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:41:33.636845  359640 out.go:368] Setting JSON to false
	I1101 10:41:33.638073  359640 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8634,"bootTime":1761985060,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:41:33.638169  359640 start.go:143] virtualization: kvm guest
	I1101 10:41:33.640208  359640 out.go:179] * [old-k8s-version-707467] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:41:33.641490  359640 notify.go:221] Checking for updates...
	I1101 10:41:33.641525  359640 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:41:33.642904  359640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:41:33.644486  359640 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:41:33.645772  359640 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:41:33.647108  359640 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:41:33.648364  359640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:41:33.649885  359640 config.go:182] Loaded profile config "old-k8s-version-707467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:41:33.651710  359640 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 10:41:33.654325  359640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:41:33.689796  359640 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:41:33.690036  359640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:41:33.762630  359640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:41:33.751738094 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:41:33.762743  359640 docker.go:319] overlay module found
	I1101 10:41:33.765115  359640 out.go:179] * Using the docker driver based on existing profile
	I1101 10:41:33.766306  359640 start.go:309] selected driver: docker
	I1101 10:41:33.766321  359640 start.go:930] validating driver "docker" against &{Name:old-k8s-version-707467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-707467 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:41:33.766405  359640 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:41:33.767090  359640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:41:33.831750  359640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:41:33.821366524 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:41:33.832083  359640 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:41:33.832121  359640 cni.go:84] Creating CNI manager for ""
	I1101 10:41:33.832173  359640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:41:33.832206  359640 start.go:353] cluster config:
	{Name:old-k8s-version-707467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-707467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:41:33.833583  359640 out.go:179] * Starting "old-k8s-version-707467" primary control-plane node in "old-k8s-version-707467" cluster
	I1101 10:41:33.834524  359640 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:41:33.835543  359640 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:41:33.836646  359640 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:41:33.836688  359640 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 10:41:33.836704  359640 cache.go:59] Caching tarball of preloaded images
	I1101 10:41:33.836745  359640 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:41:33.836801  359640 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:41:33.836817  359640 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 10:41:33.836953  359640 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/old-k8s-version-707467/config.json ...
	I1101 10:41:33.858037  359640 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:41:33.858057  359640 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:41:33.858071  359640 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:41:33.858096  359640 start.go:360] acquireMachinesLock for old-k8s-version-707467: {Name:mkcdfcfe6269517d12c0be1c248e2bf65e5deaf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:41:33.858167  359640 start.go:364] duration metric: took 38.879µs to acquireMachinesLock for "old-k8s-version-707467"
	I1101 10:41:33.858186  359640 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:41:33.858192  359640 fix.go:54] fixHost starting: 
	I1101 10:41:33.858402  359640 cli_runner.go:164] Run: docker container inspect old-k8s-version-707467 --format={{.State.Status}}
	I1101 10:41:33.876838  359640 fix.go:112] recreateIfNeeded on old-k8s-version-707467: state=Stopped err=<nil>
	W1101 10:41:33.876866  359640 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:41:32.979933  358231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-433711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.608642898s)
	I1101 10:41:32.979974  358231 kic.go:203] duration metric: took 4.608813855s to extract preloaded images to volume ...
	W1101 10:41:32.980059  358231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 10:41:32.980096  358231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 10:41:32.980162  358231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:41:33.040897  358231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-433711 --name default-k8s-diff-port-433711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-433711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-433711 --network default-k8s-diff-port-433711 --ip 192.168.76.2 --volume default-k8s-diff-port-433711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:41:33.351532  358231 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433711 --format={{.State.Running}}
	I1101 10:41:33.371728  358231 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433711 --format={{.State.Status}}
	I1101 10:41:33.394668  358231 cli_runner.go:164] Run: docker exec default-k8s-diff-port-433711 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:41:33.446678  358231 oci.go:144] the created container "default-k8s-diff-port-433711" has a running status.
	I1101 10:41:33.446735  358231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/default-k8s-diff-port-433711/id_rsa...
	I1101 10:41:33.552267  358231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-58021/.minikube/machines/default-k8s-diff-port-433711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:41:33.580615  358231 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433711 --format={{.State.Status}}
	I1101 10:41:33.606236  358231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:41:33.606261  358231 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-433711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:41:33.651086  358231 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433711 --format={{.State.Status}}
	I1101 10:41:33.680106  358231 machine.go:94] provisionDockerMachine start ...
	I1101 10:41:33.680340  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:33.706792  358231 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:33.707202  358231 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1101 10:41:33.707223  358231 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:41:33.708078  358231 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:41:36.849365  358231 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-433711
	
	I1101 10:41:36.849391  358231 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-433711"
	I1101 10:41:36.849447  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:36.869590  358231 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:36.869807  358231 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1101 10:41:36.869821  358231 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-433711 && echo "default-k8s-diff-port-433711" | sudo tee /etc/hostname
	I1101 10:41:37.025673  358231 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-433711
	
	I1101 10:41:37.025777  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:37.045205  358231 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:37.045442  358231 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1101 10:41:37.045469  358231 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-433711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-433711/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-433711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:41:37.185377  358231 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:41:37.185411  358231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:41:37.185436  358231 ubuntu.go:190] setting up certificates
	I1101 10:41:37.185449  358231 provision.go:84] configureAuth start
	I1101 10:41:37.185518  358231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-433711
	I1101 10:41:37.205398  358231 provision.go:143] copyHostCerts
	I1101 10:41:37.205470  358231 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:41:37.205486  358231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:41:37.205603  358231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:41:37.205741  358231 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:41:37.205753  358231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:41:37.205797  358231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:41:37.205897  358231 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:41:37.205907  358231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:41:37.205945  358231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:41:37.206073  358231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-433711 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-433711 localhost minikube]
	I1101 10:41:37.553318  358231 provision.go:177] copyRemoteCerts
	I1101 10:41:37.553366  358231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:41:37.553398  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:37.573025  358231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/default-k8s-diff-port-433711/id_rsa Username:docker}
	W1101 10:41:32.911467  349346 node_ready.go:57] node "embed-certs-071527" has "Ready":"False" status (will retry)
	W1101 10:41:35.411463  349346 node_ready.go:57] node "embed-certs-071527" has "Ready":"False" status (will retry)
	W1101 10:41:37.411719  349346 node_ready.go:57] node "embed-certs-071527" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 01 10:41:25 no-preload-753486 crio[776]: time="2025-11-01T10:41:25.315923018Z" level=info msg="Starting container: 6dde531f0b56d17a4287a15808398d939705053075e3aa2b3b0597a3f70a7bb3" id=1756f1ba-83b2-4e20-9415-48f2a17fa78c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:41:25 no-preload-753486 crio[776]: time="2025-11-01T10:41:25.317876676Z" level=info msg="Started container" PID=2896 containerID=6dde531f0b56d17a4287a15808398d939705053075e3aa2b3b0597a3f70a7bb3 description=kube-system/coredns-66bc5c9577-6zph7/coredns id=1756f1ba-83b2-4e20-9415-48f2a17fa78c name=/runtime.v1.RuntimeService/StartContainer sandboxID=05830a6f74a1d8f8d09e567a3a17cadcfcc62bb59cb2178f8975ae103012d6a1
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.156236122Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f8481071-25b1-47ff-be34-fabd903cac62 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.156376146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.163998275Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:caca4294e94b1cb32e47fe185d190bb48ef71ad9a2726b1198e60b3f89952976 UID:9dd3f019-b2ff-48ef-871e-baed334b2205 NetNS:/var/run/netns/9db0d741-34c4-4f87-9b49-7d27a3a9fd1e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009d80e8}] Aliases:map[]}"
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.164030641Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.174931992Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:caca4294e94b1cb32e47fe185d190bb48ef71ad9a2726b1198e60b3f89952976 UID:9dd3f019-b2ff-48ef-871e-baed334b2205 NetNS:/var/run/netns/9db0d741-34c4-4f87-9b49-7d27a3a9fd1e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009d80e8}] Aliases:map[]}"
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.175078556Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.176018139Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.177142537Z" level=info msg="Ran pod sandbox caca4294e94b1cb32e47fe185d190bb48ef71ad9a2726b1198e60b3f89952976 with infra container: default/busybox/POD" id=f8481071-25b1-47ff-be34-fabd903cac62 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.1782356Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=092bad91-7efc-45c6-b993-5a993e524cf5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.178352031Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=092bad91-7efc-45c6-b993-5a993e524cf5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.178384087Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=092bad91-7efc-45c6-b993-5a993e524cf5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.178866877Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f4402467-5622-4971-9d0f-3234cd4e27ea name=/runtime.v1.ImageService/PullImage
	Nov 01 10:41:28 no-preload-753486 crio[776]: time="2025-11-01T10:41:28.18252184Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:41:30 no-preload-753486 crio[776]: time="2025-11-01T10:41:30.684791853Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f4402467-5622-4971-9d0f-3234cd4e27ea name=/runtime.v1.ImageService/PullImage
	Nov 01 10:41:30 no-preload-753486 crio[776]: time="2025-11-01T10:41:30.68539133Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3d547f11-6435-47ff-8e7a-11551125354b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:30 no-preload-753486 crio[776]: time="2025-11-01T10:41:30.686719445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5ea13c43-30a2-47a9-bbc8-83fb4766481f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:30 no-preload-753486 crio[776]: time="2025-11-01T10:41:30.689784755Z" level=info msg="Creating container: default/busybox/busybox" id=529f2d8a-5146-4176-bb41-78a9f801b0c3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:41:30 no-preload-753486 crio[776]: time="2025-11-01T10:41:30.689930999Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:41:30 no-preload-753486 crio[776]: time="2025-11-01T10:41:30.694917939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:41:30 no-preload-753486 crio[776]: time="2025-11-01T10:41:30.695490725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:41:30 no-preload-753486 crio[776]: time="2025-11-01T10:41:30.723357023Z" level=info msg="Created container bd2292f1de3242c7745f9487a5bd4d43a4926a59c54233f04b3ada7c09ebf9d3: default/busybox/busybox" id=529f2d8a-5146-4176-bb41-78a9f801b0c3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:41:30 no-preload-753486 crio[776]: time="2025-11-01T10:41:30.723970494Z" level=info msg="Starting container: bd2292f1de3242c7745f9487a5bd4d43a4926a59c54233f04b3ada7c09ebf9d3" id=d8b12901-3bbd-45eb-9f39-6d0297880033 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:41:30 no-preload-753486 crio[776]: time="2025-11-01T10:41:30.725893607Z" level=info msg="Started container" PID=2971 containerID=bd2292f1de3242c7745f9487a5bd4d43a4926a59c54233f04b3ada7c09ebf9d3 description=default/busybox/busybox id=d8b12901-3bbd-45eb-9f39-6d0297880033 name=/runtime.v1.RuntimeService/StartContainer sandboxID=caca4294e94b1cb32e47fe185d190bb48ef71ad9a2726b1198e60b3f89952976
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bd2292f1de324       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   caca4294e94b1       busybox                                     default
	6dde531f0b56d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   05830a6f74a1d       coredns-66bc5c9577-6zph7                    kube-system
	eeb39b7ea1f34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   745f48d6e6812       storage-provisioner                         kube-system
	1d10f0c19b845       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   c33a9e6732892       kindnet-dlvlr                               kube-system
	4ca9521122930       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      26 seconds ago      Running             kube-proxy                0                   65cb82c29e976       kube-proxy-d5hv4                            kube-system
	b83a3699f8971       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      37 seconds ago      Running             kube-controller-manager   0                   abdddcea3e702       kube-controller-manager-no-preload-753486   kube-system
	5fa58663fe3d7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      37 seconds ago      Running             kube-scheduler            0                   70acda6c1b3c5       kube-scheduler-no-preload-753486            kube-system
	0a8362219b628       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      37 seconds ago      Running             kube-apiserver            0                   a8ee8d82dd352       kube-apiserver-no-preload-753486            kube-system
	31f90bda8839e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      37 seconds ago      Running             etcd                      0                   e2969726c6fec       etcd-no-preload-753486                      kube-system
	
	
	==> coredns [6dde531f0b56d17a4287a15808398d939705053075e3aa2b3b0597a3f70a7bb3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58251 - 47389 "HINFO IN 6404070110017811415.6905401946403323776. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.139839658s
	
	
	==> describe nodes <==
	Name:               no-preload-753486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-753486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=no-preload-753486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_41_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:41:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-753486
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:41:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:41:36 +0000   Sat, 01 Nov 2025 10:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:41:36 +0000   Sat, 01 Nov 2025 10:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:41:36 +0000   Sat, 01 Nov 2025 10:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:41:36 +0000   Sat, 01 Nov 2025 10:41:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-753486
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                cc437131-bcc8-4de4-a901-e5bef9dd6b70
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-6zph7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-753486                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-dlvlr                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-753486             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-753486    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-d5hv4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-753486             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node no-preload-753486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node no-preload-753486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node no-preload-753486 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s                kubelet          Node no-preload-753486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet          Node no-preload-753486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet          Node no-preload-753486 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node no-preload-753486 event: Registered Node no-preload-753486 in Controller
	  Normal  NodeReady                14s                kubelet          Node no-preload-753486 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[Nov 1 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	
	
	==> etcd [31f90bda8839e9e36f7e7f94620605d0ac17db21b229fbe682aed253d0eee397] <==
	{"level":"warn","ts":"2025-11-01T10:41:06.903960Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.669368ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-11-01T10:41:06.903987Z","caller":"traceutil/trace.go:172","msg":"trace[1150625550] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:297; }","duration":"136.713489ms","start":"2025-11-01T10:41:06.767264Z","end":"2025-11-01T10:41:06.903977Z","steps":["trace[1150625550] 'agreement among raft nodes before linearized reading'  (duration: 136.592474ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:41:06.948855Z","caller":"traceutil/trace.go:172","msg":"trace[1757257211] transaction","detail":"{read_only:false; number_of_response:0; response_revision:297; }","duration":"252.31873ms","start":"2025-11-01T10:41:06.696522Z","end":"2025-11-01T10:41:06.948841Z","steps":["trace[1757257211] 'process raft request'  (duration: 252.107803ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:41:06.948883Z","caller":"traceutil/trace.go:172","msg":"trace[1538802605] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"148.666942ms","start":"2025-11-01T10:41:06.800198Z","end":"2025-11-01T10:41:06.948865Z","steps":["trace[1538802605] 'process raft request'  (duration: 148.583028ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:41:06.948929Z","caller":"traceutil/trace.go:172","msg":"trace[479403632] transaction","detail":"{read_only:false; number_of_response:0; response_revision:297; }","duration":"252.393578ms","start":"2025-11-01T10:41:06.696530Z","end":"2025-11-01T10:41:06.948924Z","steps":["trace[479403632] 'process raft request'  (duration: 252.226368ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:41:07.289974Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.628532ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596765963342274 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/expand-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/expand-controller\" value_size:122 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:41:07.290301Z","caller":"traceutil/trace.go:172","msg":"trace[285989447] transaction","detail":"{read_only:false; response_revision:303; number_of_response:1; }","duration":"232.380746ms","start":"2025-11-01T10:41:07.057906Z","end":"2025-11-01T10:41:07.290287Z","steps":["trace[285989447] 'process raft request'  (duration: 232.318245ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:41:07.290344Z","caller":"traceutil/trace.go:172","msg":"trace[1116644442] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"236.524986ms","start":"2025-11-01T10:41:07.053813Z","end":"2025-11-01T10:41:07.290338Z","steps":["trace[1116644442] 'process raft request'  (duration: 236.339069ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:41:07.290292Z","caller":"traceutil/trace.go:172","msg":"trace[710670055] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"236.671062ms","start":"2025-11-01T10:41:07.053577Z","end":"2025-11-01T10:41:07.290248Z","steps":["trace[710670055] 'process raft request'  (duration: 118.275621ms)","trace[710670055] 'compare'  (duration: 117.488942ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:41:07.470954Z","caller":"traceutil/trace.go:172","msg":"trace[1022498304] transaction","detail":"{read_only:false; response_revision:306; number_of_response:1; }","duration":"171.998712ms","start":"2025-11-01T10:41:07.298938Z","end":"2025-11-01T10:41:07.470937Z","steps":["trace[1022498304] 'process raft request'  (duration: 171.944606ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:41:07.470987Z","caller":"traceutil/trace.go:172","msg":"trace[1301189069] transaction","detail":"{read_only:false; response_revision:305; number_of_response:1; }","duration":"173.228996ms","start":"2025-11-01T10:41:07.297730Z","end":"2025-11-01T10:41:07.470959Z","steps":["trace[1301189069] 'process raft request'  (duration: 173.103148ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:41:07.470963Z","caller":"traceutil/trace.go:172","msg":"trace[1241623849] transaction","detail":"{read_only:false; response_revision:304; number_of_response:1; }","duration":"175.108994ms","start":"2025-11-01T10:41:07.295826Z","end":"2025-11-01T10:41:07.470935Z","steps":["trace[1241623849] 'process raft request'  (duration: 123.968485ms)","trace[1241623849] 'compare'  (duration: 50.898888ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:41:07.778163Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"278.367591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/validatingadmissionpolicy-status-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:41:07.778236Z","caller":"traceutil/trace.go:172","msg":"trace[813102499] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/validatingadmissionpolicy-status-controller; range_end:; response_count:0; response_revision:307; }","duration":"278.456387ms","start":"2025-11-01T10:41:07.499762Z","end":"2025-11-01T10:41:07.778218Z","steps":["trace[813102499] 'agreement among raft nodes before linearized reading'  (duration: 71.949996ms)","trace[813102499] 'range keys from in-memory index tree'  (duration: 206.387196ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:41:07.778321Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"206.516075ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596765963342289 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/kindnet\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/kindnet\" value_size:452 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:41:07.778598Z","caller":"traceutil/trace.go:172","msg":"trace[996220049] transaction","detail":"{read_only:false; response_revision:309; number_of_response:1; }","duration":"297.984107ms","start":"2025-11-01T10:41:07.480602Z","end":"2025-11-01T10:41:07.778586Z","steps":["trace[996220049] 'process raft request'  (duration: 297.871023ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:41:07.778639Z","caller":"traceutil/trace.go:172","msg":"trace[477215284] transaction","detail":"{read_only:false; response_revision:308; number_of_response:1; }","duration":"300.276003ms","start":"2025-11-01T10:41:07.478334Z","end":"2025-11-01T10:41:07.778610Z","steps":["trace[477215284] 'process raft request'  (duration: 93.420422ms)","trace[477215284] 'compare'  (duration: 206.408828ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:41:07.778734Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:41:07.478317Z","time spent":"300.372751ms","remote":"127.0.0.1:38066","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":505,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/kindnet\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/kindnet\" value_size:452 >> failure:<>"}
	{"level":"info","ts":"2025-11-01T10:41:08.049111Z","caller":"traceutil/trace.go:172","msg":"trace[319565675] linearizableReadLoop","detail":"{readStateIndex:323; appliedIndex:323; }","duration":"142.729349ms","start":"2025-11-01T10:41:07.906357Z","end":"2025-11-01T10:41:08.049086Z","steps":["trace[319565675] 'read index received'  (duration: 142.717112ms)","trace[319565675] 'applied index is now lower than readState.Index'  (duration: 10.509µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:41:08.122762Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"216.382422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-11-01T10:41:08.123080Z","caller":"traceutil/trace.go:172","msg":"trace[693352172] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:313; }","duration":"216.707505ms","start":"2025-11-01T10:41:07.906350Z","end":"2025-11-01T10:41:08.123057Z","steps":["trace[693352172] 'agreement among raft nodes before linearized reading'  (duration: 142.788434ms)","trace[693352172] 'range keys from in-memory index tree'  (duration: 73.49991ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:41:08.123156Z","caller":"traceutil/trace.go:172","msg":"trace[835672126] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"235.398553ms","start":"2025-11-01T10:41:07.887740Z","end":"2025-11-01T10:41:08.123139Z","steps":["trace[835672126] 'process raft request'  (duration: 161.398229ms)","trace[835672126] 'compare'  (duration: 73.649476ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:41:08.123185Z","caller":"traceutil/trace.go:172","msg":"trace[1308712644] transaction","detail":"{read_only:false; response_revision:315; number_of_response:1; }","duration":"173.084664ms","start":"2025-11-01T10:41:07.950086Z","end":"2025-11-01T10:41:08.123171Z","steps":["trace[1308712644] 'process raft request'  (duration: 172.854986ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:41:31.691751Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"231.655731ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:41:31.691830Z","caller":"traceutil/trace.go:172","msg":"trace[1468859421] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:467; }","duration":"231.759298ms","start":"2025-11-01T10:41:31.460058Z","end":"2025-11-01T10:41:31.691817Z","steps":["trace[1468859421] 'range keys from in-memory index tree'  (duration: 231.586796ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:41:38 up  2:23,  0 user,  load average: 4.75, 3.74, 2.38
	Linux no-preload-753486 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1d10f0c19b8450db9c91194a0d6d842cb76e016e881c65f4fae511709e4e151a] <==
	I1101 10:41:14.569165       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:41:14.569415       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:41:14.569567       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:41:14.569586       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:41:14.569599       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:41:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:41:14.867968       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:41:14.868005       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:41:14.868383       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:41:14.868444       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:41:15.168770       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:41:15.168797       1 metrics.go:72] Registering metrics
	I1101 10:41:15.168867       1 controller.go:711] "Syncing nftables rules"
	I1101 10:41:24.772686       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:41:24.772739       1 main.go:301] handling current node
	I1101 10:41:34.773769       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:41:34.773837       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0a8362219b628e495946629d4790224c344f9875f38924c0048f299c598eaa58] <==
	E1101 10:41:03.401479       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1101 10:41:03.449272       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:41:03.464458       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:41:03.464973       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:41:03.472126       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:41:03.472811       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:41:03.544743       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:41:04.251689       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:41:04.255736       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:41:04.255754       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:41:04.798037       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:41:04.843349       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:41:04.956785       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:41:04.965893       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 10:41:04.967163       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:41:04.972835       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:41:05.296113       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:41:05.792351       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:41:05.808008       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:41:05.816969       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:41:10.899675       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:41:10.903459       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:41:11.298636       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 10:41:11.400076       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1101 10:41:36.941883       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:44734: use of closed network connection
	
	
	==> kube-controller-manager [b83a3699f897138ca3b053c98fa87ef910e0222c08c5f31aa1c930949e56e6f7] <==
	I1101 10:41:10.295430       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:41:10.295438       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:41:10.295662       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:41:10.295797       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:41:10.296049       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:41:10.296820       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:41:10.297951       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:41:10.299274       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:41:10.299313       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:41:10.299738       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:41:10.299815       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:41:10.299866       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:41:10.299876       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:41:10.299884       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:41:10.300629       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:41:10.305046       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:41:10.305080       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:41:10.306215       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:41:10.306637       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-753486" podCIDRs=["10.244.0.0/24"]
	I1101 10:41:10.313565       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:41:10.314732       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:41:10.314743       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:41:10.319877       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:41:10.321714       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:41:25.258849       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4ca95211229306cb39f126b5e7059f76ff630c7bfbec7b2b06835000b5b1c185] <==
	I1101 10:41:11.761851       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:41:11.869838       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:41:11.971538       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:41:11.971586       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:41:11.971708       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:41:11.996541       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:41:11.996623       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:41:12.005196       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:41:12.005656       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:41:12.005680       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:41:12.006964       1 config.go:200] "Starting service config controller"
	I1101 10:41:12.008822       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:41:12.007074       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:41:12.012704       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:41:12.012769       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:41:12.007981       1 config.go:309] "Starting node config controller"
	I1101 10:41:12.013487       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:41:12.013535       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:41:12.007093       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:41:12.013585       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:41:12.109673       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:41:12.114463       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5fa58663fe3d707f3be66c19f90a318c704d6a48288d2c5cf92f983b6e5e712d] <==
	E1101 10:41:03.305695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:41:03.305698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:41:03.305729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:41:03.305758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:41:03.305770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:41:03.305809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:41:03.305829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:41:03.305926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:41:03.305955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:41:04.111833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:41:04.194430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:41:04.273040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 10:41:04.281307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:41:04.368449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:41:04.400739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:41:04.407237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:41:04.416691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:41:04.455940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:41:04.515525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:41:04.538847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:41:04.539061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:41:04.561663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:41:04.579877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:41:04.598338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1101 10:41:06.801447       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:41:07 no-preload-753486 kubelet[2304]: I1101 10:41:07.051999    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-753486" podStartSLOduration=2.051977053 podStartE2EDuration="2.051977053s" podCreationTimestamp="2025-11-01 10:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:06.952273086 +0000 UTC m=+1.377008326" watchObservedRunningTime="2025-11-01 10:41:07.051977053 +0000 UTC m=+1.476712290"
	Nov 01 10:41:07 no-preload-753486 kubelet[2304]: I1101 10:41:07.292711    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-753486" podStartSLOduration=2.292693612 podStartE2EDuration="2.292693612s" podCreationTimestamp="2025-11-01 10:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:07.291944098 +0000 UTC m=+1.716679380" watchObservedRunningTime="2025-11-01 10:41:07.292693612 +0000 UTC m=+1.717428850"
	Nov 01 10:41:07 no-preload-753486 kubelet[2304]: I1101 10:41:07.292823    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-753486" podStartSLOduration=2.2928184910000002 podStartE2EDuration="2.292818491s" podCreationTimestamp="2025-11-01 10:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:07.052203855 +0000 UTC m=+1.476939090" watchObservedRunningTime="2025-11-01 10:41:07.292818491 +0000 UTC m=+1.717553710"
	Nov 01 10:41:07 no-preload-753486 kubelet[2304]: I1101 10:41:07.472596    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-753486" podStartSLOduration=2.472574361 podStartE2EDuration="2.472574361s" podCreationTimestamp="2025-11-01 10:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:07.472563052 +0000 UTC m=+1.897298291" watchObservedRunningTime="2025-11-01 10:41:07.472574361 +0000 UTC m=+1.897309600"
	Nov 01 10:41:10 no-preload-753486 kubelet[2304]: I1101 10:41:10.408076    2304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:41:10 no-preload-753486 kubelet[2304]: I1101 10:41:10.408792    2304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:41:11 no-preload-753486 kubelet[2304]: I1101 10:41:11.398564    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c89ce298-4dde-42bb-a961-7db311962eb9-lib-modules\") pod \"kindnet-dlvlr\" (UID: \"c89ce298-4dde-42bb-a961-7db311962eb9\") " pod="kube-system/kindnet-dlvlr"
	Nov 01 10:41:11 no-preload-753486 kubelet[2304]: I1101 10:41:11.398646    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8f7e30a-9d92-4d9f-936c-6770bef1fe6e-xtables-lock\") pod \"kube-proxy-d5hv4\" (UID: \"f8f7e30a-9d92-4d9f-936c-6770bef1fe6e\") " pod="kube-system/kube-proxy-d5hv4"
	Nov 01 10:41:11 no-preload-753486 kubelet[2304]: I1101 10:41:11.398702    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8f7e30a-9d92-4d9f-936c-6770bef1fe6e-lib-modules\") pod \"kube-proxy-d5hv4\" (UID: \"f8f7e30a-9d92-4d9f-936c-6770bef1fe6e\") " pod="kube-system/kube-proxy-d5hv4"
	Nov 01 10:41:11 no-preload-753486 kubelet[2304]: I1101 10:41:11.398751    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c89ce298-4dde-42bb-a961-7db311962eb9-cni-cfg\") pod \"kindnet-dlvlr\" (UID: \"c89ce298-4dde-42bb-a961-7db311962eb9\") " pod="kube-system/kindnet-dlvlr"
	Nov 01 10:41:11 no-preload-753486 kubelet[2304]: I1101 10:41:11.398773    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdbj4\" (UniqueName: \"kubernetes.io/projected/c89ce298-4dde-42bb-a961-7db311962eb9-kube-api-access-cdbj4\") pod \"kindnet-dlvlr\" (UID: \"c89ce298-4dde-42bb-a961-7db311962eb9\") " pod="kube-system/kindnet-dlvlr"
	Nov 01 10:41:11 no-preload-753486 kubelet[2304]: I1101 10:41:11.398799    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c89ce298-4dde-42bb-a961-7db311962eb9-xtables-lock\") pod \"kindnet-dlvlr\" (UID: \"c89ce298-4dde-42bb-a961-7db311962eb9\") " pod="kube-system/kindnet-dlvlr"
	Nov 01 10:41:11 no-preload-753486 kubelet[2304]: I1101 10:41:11.398820    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8f7e30a-9d92-4d9f-936c-6770bef1fe6e-kube-proxy\") pod \"kube-proxy-d5hv4\" (UID: \"f8f7e30a-9d92-4d9f-936c-6770bef1fe6e\") " pod="kube-system/kube-proxy-d5hv4"
	Nov 01 10:41:11 no-preload-753486 kubelet[2304]: I1101 10:41:11.398843    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcm77\" (UniqueName: \"kubernetes.io/projected/f8f7e30a-9d92-4d9f-936c-6770bef1fe6e-kube-api-access-lcm77\") pod \"kube-proxy-d5hv4\" (UID: \"f8f7e30a-9d92-4d9f-936c-6770bef1fe6e\") " pod="kube-system/kube-proxy-d5hv4"
	Nov 01 10:41:13 no-preload-753486 kubelet[2304]: I1101 10:41:13.312154    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d5hv4" podStartSLOduration=2.312128938 podStartE2EDuration="2.312128938s" podCreationTimestamp="2025-11-01 10:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:11.726912064 +0000 UTC m=+6.151647322" watchObservedRunningTime="2025-11-01 10:41:13.312128938 +0000 UTC m=+7.736864176"
	Nov 01 10:41:17 no-preload-753486 kubelet[2304]: I1101 10:41:17.039123    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dlvlr" podStartSLOduration=3.417199982 podStartE2EDuration="6.039099431s" podCreationTimestamp="2025-11-01 10:41:11 +0000 UTC" firstStartedPulling="2025-11-01 10:41:11.631768913 +0000 UTC m=+6.056504142" lastFinishedPulling="2025-11-01 10:41:14.253668373 +0000 UTC m=+8.678403591" observedRunningTime="2025-11-01 10:41:14.736674123 +0000 UTC m=+9.161409364" watchObservedRunningTime="2025-11-01 10:41:17.039099431 +0000 UTC m=+11.463834670"
	Nov 01 10:41:24 no-preload-753486 kubelet[2304]: I1101 10:41:24.937528    2304 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:41:25 no-preload-753486 kubelet[2304]: I1101 10:41:25.016534    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df6b9a0f-df6b-4830-ad22-495137f60f10-config-volume\") pod \"coredns-66bc5c9577-6zph7\" (UID: \"df6b9a0f-df6b-4830-ad22-495137f60f10\") " pod="kube-system/coredns-66bc5c9577-6zph7"
	Nov 01 10:41:25 no-preload-753486 kubelet[2304]: I1101 10:41:25.016579    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gdc9\" (UniqueName: \"kubernetes.io/projected/7db59704-c595-4e63-b686-c9cfaa53266d-kube-api-access-7gdc9\") pod \"storage-provisioner\" (UID: \"7db59704-c595-4e63-b686-c9cfaa53266d\") " pod="kube-system/storage-provisioner"
	Nov 01 10:41:25 no-preload-753486 kubelet[2304]: I1101 10:41:25.016613    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtv22\" (UniqueName: \"kubernetes.io/projected/df6b9a0f-df6b-4830-ad22-495137f60f10-kube-api-access-xtv22\") pod \"coredns-66bc5c9577-6zph7\" (UID: \"df6b9a0f-df6b-4830-ad22-495137f60f10\") " pod="kube-system/coredns-66bc5c9577-6zph7"
	Nov 01 10:41:25 no-preload-753486 kubelet[2304]: I1101 10:41:25.016662    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7db59704-c595-4e63-b686-c9cfaa53266d-tmp\") pod \"storage-provisioner\" (UID: \"7db59704-c595-4e63-b686-c9cfaa53266d\") " pod="kube-system/storage-provisioner"
	Nov 01 10:41:25 no-preload-753486 kubelet[2304]: I1101 10:41:25.761682    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.761663624 podStartE2EDuration="13.761663624s" podCreationTimestamp="2025-11-01 10:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:25.76165161 +0000 UTC m=+20.186386848" watchObservedRunningTime="2025-11-01 10:41:25.761663624 +0000 UTC m=+20.186398861"
	Nov 01 10:41:25 no-preload-753486 kubelet[2304]: I1101 10:41:25.770841    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6zph7" podStartSLOduration=14.770820043 podStartE2EDuration="14.770820043s" podCreationTimestamp="2025-11-01 10:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:25.770620537 +0000 UTC m=+20.195355827" watchObservedRunningTime="2025-11-01 10:41:25.770820043 +0000 UTC m=+20.195555293"
	Nov 01 10:41:27 no-preload-753486 kubelet[2304]: I1101 10:41:27.932718    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh48p\" (UniqueName: \"kubernetes.io/projected/9dd3f019-b2ff-48ef-871e-baed334b2205-kube-api-access-zh48p\") pod \"busybox\" (UID: \"9dd3f019-b2ff-48ef-871e-baed334b2205\") " pod="default/busybox"
	Nov 01 10:41:36 no-preload-753486 kubelet[2304]: E1101 10:41:36.941824    2304 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39206->127.0.0.1:39553: write tcp 127.0.0.1:39206->127.0.0.1:39553: write: broken pipe
	
	
	==> storage-provisioner [eeb39b7ea1f344c13954ae8366d1e06b6df3c5acb130e8132a3338064c4489a8] <==
	I1101 10:41:25.330695       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:41:25.339274       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:41:25.339323       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:41:25.341922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:25.346636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:41:25.346840       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:41:25.346904       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"42e98089-1a88-4a73-8966-d74c8995409f", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-753486_900ae7f6-cdd0-4644-834c-8054545d703b became leader
	I1101 10:41:25.347055       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-753486_900ae7f6-cdd0-4644-834c-8054545d703b!
	W1101 10:41:25.348989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:25.352339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:41:25.447231       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-753486_900ae7f6-cdd0-4644-834c-8054545d703b!
	W1101 10:41:27.355603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:27.360207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:29.363571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:29.367605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:31.371103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:31.396975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:33.401079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:33.406001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:35.409822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:35.413984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:37.417371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:37.421527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-753486 -n no-preload-753486
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-753486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.21s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-071527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-071527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (265.167254ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:41:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-071527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-071527 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-071527 describe deploy/metrics-server -n kube-system: exit status 1 (57.288386ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-071527 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
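The MK_ADDON_ENABLE_PAUSED error above comes from the paused-cluster check that "minikube addons enable" runs before applying an addon: it lists paused containers with runc inside the node container, and that listing exited with status 1 because /run/runc does not exist. A minimal sketch of reproducing the same check by hand is shown below; it assumes the docker driver, reuses the node container name from the docker inspect output further down, and is an illustration only, not something the test harness itself executes:

	# Hypothetical manual reproduction of the check that failed (assumes the docker
	# driver; "embed-certs-071527" is the node container name for this profile).
	docker exec embed-certs-071527 sudo runc list -f json
	# In this run the equivalent invocation exited 1 with:
	#   time="2025-11-01T10:41:53Z" level=error msg="open /run/runc: no such file or directory"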
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-071527
helpers_test.go:243: (dbg) docker inspect embed-certs-071527:

-- stdout --
	[
	    {
	        "Id": "e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347",
	        "Created": "2025-11-01T10:41:08.275582129Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 350201,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:41:08.314154109Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347/hostname",
	        "HostsPath": "/var/lib/docker/containers/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347/hosts",
	        "LogPath": "/var/lib/docker/containers/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347-json.log",
	        "Name": "/embed-certs-071527",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-071527:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-071527",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347",
	                "LowerDir": "/var/lib/docker/overlay2/146b06fdda976081efe039707e775d2e04bce53111fa9d4362cbe09e9c2d71d1-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/146b06fdda976081efe039707e775d2e04bce53111fa9d4362cbe09e9c2d71d1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/146b06fdda976081efe039707e775d2e04bce53111fa9d4362cbe09e9c2d71d1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/146b06fdda976081efe039707e775d2e04bce53111fa9d4362cbe09e9c2d71d1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-071527",
	                "Source": "/var/lib/docker/volumes/embed-certs-071527/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-071527",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-071527",
	                "name.minikube.sigs.k8s.io": "embed-certs-071527",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1ee4521b2a471551420fb027150c85112098c8c4a34560e893df7e57e072b685",
	            "SandboxKey": "/var/run/docker/netns/1ee4521b2a47",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-071527": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:8c:f0:4f:47:5e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71c4a921c7722fad5b37063fce8060e68553067a07b69ccdd6ced39559bcf13c",
	                    "EndpointID": "7b0144547b9c79aed4ab0c975fda82eef160bae62b8e3563f01a8b116f80e5e3",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-071527",
	                        "e344e6e53c87"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-071527 -n embed-certs-071527
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-071527 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat docker --no-pager                                                                                                                                                                                 │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/docker/daemon.json                                                                                                                                                                                     │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo docker system info                                                                                                                                                                                              │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                             │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                             │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                        │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                  │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cri-dockerd --version                                                                                                                                                                                           │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                             │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo crio config                                                                                                                                                                                                     │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p custom-flannel-299863                                                                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p disable-driver-mounts-339061                                                                                                                                                                                                               │ disable-driver-mounts-339061 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-707467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p no-preload-753486 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-071527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:41:33
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:41:33.635693  359640 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:41:33.635988  359640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:41:33.635999  359640 out.go:374] Setting ErrFile to fd 2...
	I1101 10:41:33.636003  359640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:41:33.636307  359640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:41:33.636845  359640 out.go:368] Setting JSON to false
	I1101 10:41:33.638073  359640 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8634,"bootTime":1761985060,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:41:33.638169  359640 start.go:143] virtualization: kvm guest
	I1101 10:41:33.640208  359640 out.go:179] * [old-k8s-version-707467] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:41:33.641490  359640 notify.go:221] Checking for updates...
	I1101 10:41:33.641525  359640 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:41:33.642904  359640 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:41:33.644486  359640 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:41:33.645772  359640 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:41:33.647108  359640 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:41:33.648364  359640 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:41:33.649885  359640 config.go:182] Loaded profile config "old-k8s-version-707467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:41:33.651710  359640 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 10:41:33.654325  359640 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:41:33.689796  359640 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:41:33.690036  359640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:41:33.762630  359640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:41:33.751738094 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:41:33.762743  359640 docker.go:319] overlay module found
	I1101 10:41:33.765115  359640 out.go:179] * Using the docker driver based on existing profile
	I1101 10:41:33.766306  359640 start.go:309] selected driver: docker
	I1101 10:41:33.766321  359640 start.go:930] validating driver "docker" against &{Name:old-k8s-version-707467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-707467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:41:33.766405  359640 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:41:33.767090  359640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:41:33.831750  359640 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:41:33.821366524 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:41:33.832083  359640 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:41:33.832121  359640 cni.go:84] Creating CNI manager for ""
	I1101 10:41:33.832173  359640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:41:33.832206  359640 start.go:353] cluster config:
	{Name:old-k8s-version-707467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-707467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:41:33.833583  359640 out.go:179] * Starting "old-k8s-version-707467" primary control-plane node in "old-k8s-version-707467" cluster
	I1101 10:41:33.834524  359640 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:41:33.835543  359640 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:41:33.836646  359640 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:41:33.836688  359640 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 10:41:33.836704  359640 cache.go:59] Caching tarball of preloaded images
	I1101 10:41:33.836745  359640 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:41:33.836801  359640 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:41:33.836817  359640 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 10:41:33.836953  359640 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/old-k8s-version-707467/config.json ...
	I1101 10:41:33.858037  359640 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:41:33.858057  359640 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:41:33.858071  359640 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:41:33.858096  359640 start.go:360] acquireMachinesLock for old-k8s-version-707467: {Name:mkcdfcfe6269517d12c0be1c248e2bf65e5deaf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:41:33.858167  359640 start.go:364] duration metric: took 38.879µs to acquireMachinesLock for "old-k8s-version-707467"
	I1101 10:41:33.858186  359640 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:41:33.858192  359640 fix.go:54] fixHost starting: 
	I1101 10:41:33.858402  359640 cli_runner.go:164] Run: docker container inspect old-k8s-version-707467 --format={{.State.Status}}
	I1101 10:41:33.876838  359640 fix.go:112] recreateIfNeeded on old-k8s-version-707467: state=Stopped err=<nil>
	W1101 10:41:33.876866  359640 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:41:32.979933  358231 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-433711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.608642898s)
	I1101 10:41:32.979974  358231 kic.go:203] duration metric: took 4.608813855s to extract preloaded images to volume ...
	W1101 10:41:32.980059  358231 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 10:41:32.980096  358231 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 10:41:32.980162  358231 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 10:41:33.040897  358231 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-433711 --name default-k8s-diff-port-433711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-433711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-433711 --network default-k8s-diff-port-433711 --ip 192.168.76.2 --volume default-k8s-diff-port-433711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 10:41:33.351532  358231 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433711 --format={{.State.Running}}
	I1101 10:41:33.371728  358231 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433711 --format={{.State.Status}}
	I1101 10:41:33.394668  358231 cli_runner.go:164] Run: docker exec default-k8s-diff-port-433711 stat /var/lib/dpkg/alternatives/iptables
	I1101 10:41:33.446678  358231 oci.go:144] the created container "default-k8s-diff-port-433711" has a running status.
	I1101 10:41:33.446735  358231 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/default-k8s-diff-port-433711/id_rsa...
	I1101 10:41:33.552267  358231 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21830-58021/.minikube/machines/default-k8s-diff-port-433711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 10:41:33.580615  358231 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433711 --format={{.State.Status}}
	I1101 10:41:33.606236  358231 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 10:41:33.606261  358231 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-433711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 10:41:33.651086  358231 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433711 --format={{.State.Status}}
	I1101 10:41:33.680106  358231 machine.go:94] provisionDockerMachine start ...
	I1101 10:41:33.680340  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:33.706792  358231 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:33.707202  358231 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1101 10:41:33.707223  358231 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:41:33.708078  358231 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1101 10:41:36.849365  358231 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-433711
	
	I1101 10:41:36.849391  358231 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-433711"
	I1101 10:41:36.849447  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:36.869590  358231 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:36.869807  358231 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1101 10:41:36.869821  358231 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-433711 && echo "default-k8s-diff-port-433711" | sudo tee /etc/hostname
	I1101 10:41:37.025673  358231 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-433711
	
	I1101 10:41:37.025777  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:37.045205  358231 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:37.045442  358231 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1101 10:41:37.045469  358231 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-433711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-433711/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-433711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:41:37.185377  358231 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:41:37.185411  358231 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:41:37.185436  358231 ubuntu.go:190] setting up certificates
	I1101 10:41:37.185449  358231 provision.go:84] configureAuth start
	I1101 10:41:37.185518  358231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-433711
	I1101 10:41:37.205398  358231 provision.go:143] copyHostCerts
	I1101 10:41:37.205470  358231 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:41:37.205486  358231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:41:37.205603  358231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:41:37.205741  358231 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:41:37.205753  358231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:41:37.205797  358231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:41:37.205897  358231 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:41:37.205907  358231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:41:37.205945  358231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:41:37.206073  358231 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-433711 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-433711 localhost minikube]
	I1101 10:41:37.553318  358231 provision.go:177] copyRemoteCerts
	I1101 10:41:37.553366  358231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:41:37.553398  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:37.573025  358231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/default-k8s-diff-port-433711/id_rsa Username:docker}
	W1101 10:41:32.911467  349346 node_ready.go:57] node "embed-certs-071527" has "Ready":"False" status (will retry)
	W1101 10:41:35.411463  349346 node_ready.go:57] node "embed-certs-071527" has "Ready":"False" status (will retry)
	W1101 10:41:37.411719  349346 node_ready.go:57] node "embed-certs-071527" has "Ready":"False" status (will retry)
	I1101 10:41:33.878682  359640 out.go:252] * Restarting existing docker container for "old-k8s-version-707467" ...
	I1101 10:41:33.878780  359640 cli_runner.go:164] Run: docker start old-k8s-version-707467
	I1101 10:41:34.125698  359640 cli_runner.go:164] Run: docker container inspect old-k8s-version-707467 --format={{.State.Status}}
	I1101 10:41:34.143113  359640 kic.go:430] container "old-k8s-version-707467" state is running.
	I1101 10:41:34.143536  359640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-707467
	I1101 10:41:34.160762  359640 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/old-k8s-version-707467/config.json ...
	I1101 10:41:34.161008  359640 machine.go:94] provisionDockerMachine start ...
	I1101 10:41:34.161072  359640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:41:34.180900  359640 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:34.181229  359640 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 10:41:34.181249  359640 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:41:34.181902  359640 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39636->127.0.0.1:33108: read: connection reset by peer
	I1101 10:41:37.327834  359640 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-707467
	
	I1101 10:41:37.327858  359640 ubuntu.go:182] provisioning hostname "old-k8s-version-707467"
	I1101 10:41:37.327926  359640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:41:37.347635  359640 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:37.347886  359640 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 10:41:37.347901  359640 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-707467 && echo "old-k8s-version-707467" | sudo tee /etc/hostname
	I1101 10:41:37.502955  359640 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-707467
	
	I1101 10:41:37.503030  359640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:41:37.523236  359640 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:37.523449  359640 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 10:41:37.523467  359640 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-707467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-707467/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-707467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:41:37.673280  359640 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:41:37.673312  359640 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:41:37.673333  359640 ubuntu.go:190] setting up certificates
	I1101 10:41:37.673345  359640 provision.go:84] configureAuth start
	I1101 10:41:37.673413  359640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-707467
	I1101 10:41:37.694768  359640 provision.go:143] copyHostCerts
	I1101 10:41:37.694816  359640 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:41:37.694828  359640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:41:37.694884  359640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:41:37.694975  359640 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:41:37.694983  359640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:41:37.695005  359640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:41:37.695060  359640 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:41:37.695064  359640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:41:37.695081  359640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:41:37.695124  359640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-707467 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-707467]
	I1101 10:41:37.814523  359640 provision.go:177] copyRemoteCerts
	I1101 10:41:37.814585  359640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:41:37.814628  359640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:41:37.835525  359640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/old-k8s-version-707467/id_rsa Username:docker}
	I1101 10:41:37.935105  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:41:37.954431  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:41:37.976284  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:41:37.994734  359640 provision.go:87] duration metric: took 321.372924ms to configureAuth
	I1101 10:41:37.994762  359640 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:41:37.994967  359640 config.go:182] Loaded profile config "old-k8s-version-707467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:41:37.995106  359640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:41:38.016008  359640 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:38.016237  359640 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 10:41:38.016256  359640 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:41:38.325879  359640 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:41:38.325907  359640 machine.go:97] duration metric: took 4.16488186s to provisionDockerMachine
	I1101 10:41:38.325925  359640 start.go:293] postStartSetup for "old-k8s-version-707467" (driver="docker")
	I1101 10:41:38.325940  359640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:41:38.326031  359640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:41:38.326095  359640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:41:38.347322  359640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/old-k8s-version-707467/id_rsa Username:docker}
	I1101 10:41:38.453991  359640 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:41:38.457531  359640 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:41:38.457566  359640 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:41:38.457580  359640 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 10:41:38.457643  359640 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 10:41:38.457739  359640 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem -> 615222.pem in /etc/ssl/certs
	I1101 10:41:38.457833  359640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:41:38.465782  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:41:38.484143  359640 start.go:296] duration metric: took 158.201362ms for postStartSetup
	I1101 10:41:38.484247  359640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:41:38.484292  359640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:41:38.503969  359640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/old-k8s-version-707467/id_rsa Username:docker}
	I1101 10:41:38.604914  359640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:41:38.610591  359640 fix.go:56] duration metric: took 4.752393157s for fixHost
	I1101 10:41:38.610626  359640 start.go:83] releasing machines lock for "old-k8s-version-707467", held for 4.752437417s
	I1101 10:41:38.610699  359640 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-707467
	I1101 10:41:38.631602  359640 ssh_runner.go:195] Run: cat /version.json
	I1101 10:41:38.631649  359640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:41:38.631685  359640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:41:38.631711  359640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:41:37.682369  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:41:37.703988  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 10:41:37.722956  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:41:37.740842  358231 provision.go:87] duration metric: took 555.376129ms to configureAuth
	I1101 10:41:37.740871  358231 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:41:37.741042  358231 config.go:182] Loaded profile config "default-k8s-diff-port-433711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:41:37.741149  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:37.761997  358231 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:37.762320  358231 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1101 10:41:37.762343  358231 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:41:38.037925  358231 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:41:38.037951  358231 machine.go:97] duration metric: took 4.357823751s to provisionDockerMachine
	I1101 10:41:38.037963  358231 client.go:176] duration metric: took 10.219757649s to LocalClient.Create
	I1101 10:41:38.037984  358231 start.go:167] duration metric: took 10.219820291s to libmachine.API.Create "default-k8s-diff-port-433711"
	I1101 10:41:38.037999  358231 start.go:293] postStartSetup for "default-k8s-diff-port-433711" (driver="docker")
	I1101 10:41:38.038014  358231 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:41:38.038070  358231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:41:38.038117  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:38.056936  358231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/default-k8s-diff-port-433711/id_rsa Username:docker}
	I1101 10:41:38.162871  358231 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:41:38.166731  358231 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:41:38.166758  358231 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:41:38.166768  358231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 10:41:38.166811  358231 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 10:41:38.166876  358231 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem -> 615222.pem in /etc/ssl/certs
	I1101 10:41:38.166956  358231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:41:38.175000  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:41:38.197484  358231 start.go:296] duration metric: took 159.467911ms for postStartSetup
	I1101 10:41:38.197899  358231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-433711
	I1101 10:41:38.219070  358231 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/config.json ...
	I1101 10:41:38.219317  358231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:41:38.219361  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:38.237337  358231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/default-k8s-diff-port-433711/id_rsa Username:docker}
	I1101 10:41:38.341047  358231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:41:38.346478  358231 start.go:128] duration metric: took 10.530756435s to createHost
	I1101 10:41:38.346527  358231 start.go:83] releasing machines lock for "default-k8s-diff-port-433711", held for 10.530928961s
	I1101 10:41:38.346597  358231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-433711
	I1101 10:41:38.366683  358231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:41:38.366789  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:38.366809  358231 ssh_runner.go:195] Run: cat /version.json
	I1101 10:41:38.366846  358231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:41:38.385720  358231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/default-k8s-diff-port-433711/id_rsa Username:docker}
	I1101 10:41:38.387292  358231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/default-k8s-diff-port-433711/id_rsa Username:docker}
	I1101 10:41:38.583954  358231 ssh_runner.go:195] Run: systemctl --version
	I1101 10:41:38.590984  358231 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:41:38.630587  358231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:41:38.635649  358231 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:41:38.635729  358231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:41:38.665079  358231 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:41:38.665102  358231 start.go:496] detecting cgroup driver to use...
	I1101 10:41:38.665135  358231 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:41:38.665182  358231 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:41:38.683975  358231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:41:38.697552  358231 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:41:38.697604  358231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:41:38.718184  358231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:41:38.741228  358231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:41:38.850035  358231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:41:38.948934  358231 docker.go:234] disabling docker service ...
	I1101 10:41:38.948991  358231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:41:38.983771  358231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:41:39.003071  358231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:41:39.101624  358231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:41:39.205133  358231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:41:39.221433  358231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:41:39.238236  358231 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:41:39.238297  358231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.248741  358231 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:41:39.248808  358231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.258376  358231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.268434  358231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.277537  358231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:41:39.287337  358231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.297640  358231 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.312223  358231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.324022  358231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:41:39.335017  358231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:41:39.342531  358231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:41:39.431715  358231 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:41:39.565337  358231 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:41:39.565412  358231 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:41:39.570346  358231 start.go:564] Will wait 60s for crictl version
	I1101 10:41:39.570409  358231 ssh_runner.go:195] Run: which crictl
	I1101 10:41:39.574798  358231 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:41:39.608247  358231 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:41:39.608352  358231 ssh_runner.go:195] Run: crio --version
	I1101 10:41:39.635471  358231 ssh_runner.go:195] Run: crio --version
	I1101 10:41:39.669364  358231 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
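
Editor's note: the run above (10:41:38.983 through 10:41:39.608 for the default-k8s-diff-port profile) shows how minikube points crictl at the CRI-O socket, rewrites the CRI-O drop-in config for the pause image and cgroup driver, and restarts the runtime. A condensed sketch of the same steps, built only from the commands and values visible in the log (not minikube's own code), for repeating them by hand on a node with CRI-O installed:

# Point crictl at the CRI-O socket
printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

# Set the pause image and cgroup manager in the CRI-O drop-in config
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf

# Reload units, restart CRI-O, and confirm the runtime answers over CRI
sudo systemctl daemon-reload
sudo systemctl restart crio
sudo crictl version
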
	I1101 10:41:38.651675  359640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/old-k8s-version-707467/id_rsa Username:docker}
	I1101 10:41:38.652330  359640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/old-k8s-version-707467/id_rsa Username:docker}
	I1101 10:41:38.755227  359640 ssh_runner.go:195] Run: systemctl --version
	I1101 10:41:38.831300  359640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:41:38.868627  359640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:41:38.873390  359640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:41:38.873442  359640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:41:38.881546  359640 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:41:38.881573  359640 start.go:496] detecting cgroup driver to use...
	I1101 10:41:38.881610  359640 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:41:38.881683  359640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:41:38.899767  359640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:41:38.913147  359640 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:41:38.914378  359640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:41:38.930305  359640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:41:38.943181  359640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:41:39.052441  359640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:41:39.152938  359640 docker.go:234] disabling docker service ...
	I1101 10:41:39.153022  359640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:41:39.169330  359640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:41:39.184116  359640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:41:39.286713  359640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:41:39.382551  359640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:41:39.395591  359640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:41:39.410571  359640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:41:39.410631  359640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.419670  359640 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:41:39.419740  359640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.429602  359640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.441647  359640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.459398  359640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:41:39.469632  359640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.481615  359640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.492260  359640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:39.504384  359640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:41:39.512634  359640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:41:39.520381  359640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:41:39.625473  359640 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:41:39.734249  359640 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:41:39.734315  359640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:41:39.738574  359640 start.go:564] Will wait 60s for crictl version
	I1101 10:41:39.738625  359640 ssh_runner.go:195] Run: which crictl
	I1101 10:41:39.742618  359640 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:41:39.767246  359640 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:41:39.767320  359640 ssh_runner.go:195] Run: crio --version
	I1101 10:41:39.795675  359640 ssh_runner.go:195] Run: crio --version
	I1101 10:41:39.827366  359640 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1101 10:41:39.828515  359640 cli_runner.go:164] Run: docker network inspect old-k8s-version-707467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
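
Editor's note: the docker network inspect call above builds its one-line summary (name, driver, subnet, gateway, MTU, container IPs) from a single Go template passed to --format. A smaller variant of the same technique, shown only to illustrate the idiom, using the network name from the log to extract just the subnet:

docker network inspect old-k8s-version-707467 \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
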
	I1101 10:41:39.845443  359640 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 10:41:39.849451  359640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:41:39.859955  359640 kubeadm.go:884] updating cluster {Name:old-k8s-version-707467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-707467 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:41:39.860053  359640 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:41:39.860091  359640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:41:39.899953  359640 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:41:39.899978  359640 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:41:39.900044  359640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:41:39.945127  359640 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:41:39.945152  359640 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:41:39.945162  359640 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1101 10:41:39.945282  359640 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-707467 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-707467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
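
Editor's note: the [Unit]/[Service]/[Install] fragment above is the kubelet drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines later (the "scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf" entry at 10:41:40.025). A sketch of creating an equivalent drop-in by hand, with the exact flags shown in the log; the empty ExecStart= line clears the ExecStart inherited from the base kubelet.service before setting the new one:

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-707467 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2

[Install]
EOF
sudo systemctl daemon-reload && sudo systemctl start kubelet
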
	I1101 10:41:39.945373  359640 ssh_runner.go:195] Run: crio config
	I1101 10:41:40.007953  359640 cni.go:84] Creating CNI manager for ""
	I1101 10:41:40.007971  359640 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:41:40.007983  359640 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:41:40.008006  359640 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-707467 NodeName:old-k8s-version-707467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:41:40.008128  359640 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-707467"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:41:40.008185  359640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 10:41:40.017386  359640 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:41:40.017435  359640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:41:40.025213  359640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1101 10:41:40.040670  359640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:41:40.054647  359640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
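
Editor's note: the kubeadm config printed above is what lands in /var/tmp/minikube/kubeadm.yaml.new via the scp at 10:41:40.054, alongside the kubelet unit and its drop-in. Outside of minikube, a generated config like this can be sanity-checked against the bundled kubeadm binary; a sketch, assuming a kubeadm release new enough to ship the config validate subcommand:

# Validate the generated config without touching the cluster
sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
  --config /var/tmp/minikube/kubeadm.yaml.new
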
	I1101 10:41:40.068257  359640 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:41:40.072057  359640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
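
Editor's note: the two /etc/hosts edits above (host.minikube.internal at 10:41:39.849 and control-plane.minikube.internal here) use the same small idiom: filter out any existing entry, append the new one, write the result to a temp file, and only then copy it into place with sudo, because a plain `sudo grep ... > /etc/hosts` would perform the redirection as the unprivileged user. The same step in isolation, with the values taken from the log:

# Replace any existing control-plane.minikube.internal entry, keep the rest of /etc/hosts
{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts;
  printf '192.168.94.2\tcontrol-plane.minikube.internal\n'; } > /tmp/hosts.$$
sudo cp /tmp/hosts.$$ /etc/hosts
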
	I1101 10:41:40.083016  359640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:41:40.165112  359640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:41:40.187003  359640 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/old-k8s-version-707467 for IP: 192.168.94.2
	I1101 10:41:40.187028  359640 certs.go:195] generating shared ca certs ...
	I1101 10:41:40.187048  359640 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:40.187231  359640 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 10:41:40.187289  359640 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 10:41:40.187303  359640 certs.go:257] generating profile certs ...
	I1101 10:41:40.187402  359640 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/old-k8s-version-707467/client.key
	I1101 10:41:40.187467  359640 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/old-k8s-version-707467/apiserver.key.6c3152ae
	I1101 10:41:40.187548  359640 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/old-k8s-version-707467/proxy-client.key
	I1101 10:41:40.187698  359640 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem (1338 bytes)
	W1101 10:41:40.187734  359640 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522_empty.pem, impossibly tiny 0 bytes
	I1101 10:41:40.187747  359640 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:41:40.187780  359640 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:41:40.187809  359640 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:41:40.187837  359640 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 10:41:40.187888  359640 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:41:40.188590  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:41:40.209916  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:41:40.230979  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:41:40.249160  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:41:40.270814  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/old-k8s-version-707467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:41:40.296520  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/old-k8s-version-707467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:41:40.314131  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/old-k8s-version-707467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:41:40.331644  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/old-k8s-version-707467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:41:40.349175  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem --> /usr/share/ca-certificates/61522.pem (1338 bytes)
	I1101 10:41:40.366066  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /usr/share/ca-certificates/615222.pem (1708 bytes)
	I1101 10:41:40.383338  359640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:41:40.400761  359640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:41:40.413389  359640 ssh_runner.go:195] Run: openssl version
	I1101 10:41:40.419887  359640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:41:40.428630  359640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:41:40.432307  359640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:41:40.432358  359640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:41:40.473240  359640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:41:40.482718  359640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/61522.pem && ln -fs /usr/share/ca-certificates/61522.pem /etc/ssl/certs/61522.pem"
	I1101 10:41:40.493063  359640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/61522.pem
	I1101 10:41:40.497756  359640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:01 /usr/share/ca-certificates/61522.pem
	I1101 10:41:40.497819  359640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/61522.pem
	I1101 10:41:40.535408  359640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/61522.pem /etc/ssl/certs/51391683.0"
	I1101 10:41:40.544901  359640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/615222.pem && ln -fs /usr/share/ca-certificates/615222.pem /etc/ssl/certs/615222.pem"
	I1101 10:41:40.555510  359640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/615222.pem
	I1101 10:41:40.559953  359640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:01 /usr/share/ca-certificates/615222.pem
	I1101 10:41:40.560018  359640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/615222.pem
	I1101 10:41:40.600445  359640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/615222.pem /etc/ssl/certs/3ec20f2e.0"
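
Editor's note: the certificate installation steps above place each CA under /usr/share/ca-certificates, link it into /etc/ssl/certs, and then add a second symlink named after the OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run), which is the lookup key OpenSSL uses to find trust anchors. The same idiom for a single certificate, using the minikubeCA paths from the log:

cert=minikubeCA.pem
sudo ln -fs "/usr/share/ca-certificates/$cert" "/etc/ssl/certs/$cert"
# Subject-hash symlink; prints b5213941 for this CA in the run above
hash=$(openssl x509 -hash -noout -in "/etc/ssl/certs/$cert")
sudo ln -fs "/etc/ssl/certs/$cert" "/etc/ssl/certs/${hash}.0"
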
	I1101 10:41:40.610028  359640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:41:40.614414  359640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:41:40.651942  359640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:41:40.694978  359640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:41:40.744904  359640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:41:40.797838  359640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:41:40.857191  359640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
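
Editor's note: the burst of openssl runs above checks every control-plane certificate with -checkend 86400, i.e. asks whether it remains valid for at least the next 24 hours; a non-zero exit would flag the cert for regeneration. The same check written as a loop over a few of the certs named in the log:

for c in apiserver apiserver-kubelet-client front-proxy-client; do
  if sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400; then
    echo "${c}.crt: valid for at least another 24h"
  else
    echo "${c}.crt: expires within 24h (or could not be read)"
  fi
done
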
	I1101 10:41:40.916242  359640 kubeadm.go:401] StartCluster: {Name:old-k8s-version-707467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-707467 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:41:40.916406  359640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:41:40.916474  359640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:41:40.952416  359640 cri.go:89] found id: "db082c42e2322ac77e4c7ac5029613f4fc315ba2c60b168fd3ad9b50ea598e6a"
	I1101 10:41:40.952440  359640 cri.go:89] found id: "c351b883f4c7425bf4220670aefd0ab86d65f31b59b246d15d5a0099457dce03"
	I1101 10:41:40.952446  359640 cri.go:89] found id: "0e2eee682652453663ca05634fbc994a3a996b9febb53a7bbd8e5ba7558b3a22"
	I1101 10:41:40.952451  359640 cri.go:89] found id: "27186a49df0ceda967ebf7847c9ede3092c812946cd2c021b530c97b5dd0302f"
	I1101 10:41:40.952455  359640 cri.go:89] found id: ""
	I1101 10:41:40.952531  359640 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:41:40.969304  359640 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:41:40Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:41:40.969388  359640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:41:40.978833  359640 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:41:40.978896  359640 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:41:40.978949  359640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:41:40.989083  359640 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:41:40.990483  359640 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-707467" does not appear in /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:41:40.991293  359640 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-58021/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-707467" cluster setting kubeconfig missing "old-k8s-version-707467" context setting]
	I1101 10:41:40.992565  359640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:40.994974  359640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:41:41.005883  359640 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1101 10:41:41.005970  359640 kubeadm.go:602] duration metric: took 27.063912ms to restartPrimaryControlPlane
	I1101 10:41:41.005985  359640 kubeadm.go:403] duration metric: took 89.752701ms to StartCluster
	I1101 10:41:41.006005  359640 settings.go:142] acquiring lock: {Name:mka443f0ac99a59b23190497686b8296dc73358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:41.006072  359640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:41:41.008082  359640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:41.008356  359640 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:41:41.008624  359640 config.go:182] Loaded profile config "old-k8s-version-707467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:41:41.008673  359640 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:41:41.008765  359640 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-707467"
	I1101 10:41:41.008787  359640 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-707467"
	W1101 10:41:41.008796  359640 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:41:41.008825  359640 host.go:66] Checking if "old-k8s-version-707467" exists ...
	I1101 10:41:41.008864  359640 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-707467"
	I1101 10:41:41.008912  359640 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-707467"
	I1101 10:41:41.008863  359640 addons.go:70] Setting dashboard=true in profile "old-k8s-version-707467"
	I1101 10:41:41.008981  359640 addons.go:239] Setting addon dashboard=true in "old-k8s-version-707467"
	W1101 10:41:41.008991  359640 addons.go:248] addon dashboard should already be in state true
	I1101 10:41:41.009041  359640 host.go:66] Checking if "old-k8s-version-707467" exists ...
	I1101 10:41:41.009295  359640 cli_runner.go:164] Run: docker container inspect old-k8s-version-707467 --format={{.State.Status}}
	I1101 10:41:41.009353  359640 cli_runner.go:164] Run: docker container inspect old-k8s-version-707467 --format={{.State.Status}}
	I1101 10:41:41.009657  359640 cli_runner.go:164] Run: docker container inspect old-k8s-version-707467 --format={{.State.Status}}
	I1101 10:41:41.009935  359640 out.go:179] * Verifying Kubernetes components...
	I1101 10:41:41.015809  359640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:41:41.036379  359640 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:41:41.037632  359640 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:41:41.037657  359640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:41:41.037736  359640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:41:41.042606  359640 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:41:41.042868  359640 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-707467"
	W1101 10:41:41.042886  359640 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:41:41.042918  359640 host.go:66] Checking if "old-k8s-version-707467" exists ...
	I1101 10:41:41.043325  359640 cli_runner.go:164] Run: docker container inspect old-k8s-version-707467 --format={{.State.Status}}
	I1101 10:41:41.044917  359640 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:41:39.670361  358231 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-433711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:41:39.688231  358231 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 10:41:39.692565  358231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:41:39.703027  358231 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-433711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-433711 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:41:39.703165  358231 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:41:39.703233  358231 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:41:39.740007  358231 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:41:39.740028  358231 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:41:39.740066  358231 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:41:39.768005  358231 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:41:39.768027  358231 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:41:39.768038  358231 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1101 10:41:39.768144  358231 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-433711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-433711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:41:39.768223  358231 ssh_runner.go:195] Run: crio config
	I1101 10:41:39.813172  358231 cni.go:84] Creating CNI manager for ""
	I1101 10:41:39.813208  358231 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:41:39.813235  358231 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:41:39.813270  358231 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-433711 NodeName:default-k8s-diff-port-433711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:41:39.813450  358231 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-433711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:41:39.813543  358231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:41:39.821967  358231 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:41:39.822046  358231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:41:39.830210  358231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 10:41:39.843999  358231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:41:39.859362  358231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1101 10:41:39.875189  358231 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:41:39.880052  358231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:41:39.892156  358231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:41:40.012278  358231 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:41:40.034768  358231 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711 for IP: 192.168.76.2
	I1101 10:41:40.034785  358231 certs.go:195] generating shared ca certs ...
	I1101 10:41:40.034799  358231 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:40.034930  358231 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 10:41:40.034967  358231 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 10:41:40.034976  358231 certs.go:257] generating profile certs ...
	I1101 10:41:40.035027  358231 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/client.key
	I1101 10:41:40.035040  358231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/client.crt with IP's: []
	I1101 10:41:40.336345  358231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/client.crt ...
	I1101 10:41:40.336373  358231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/client.crt: {Name:mkb4e687df248177294eb80297067736a7267e6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:40.336543  358231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/client.key ...
	I1101 10:41:40.336562  358231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/client.key: {Name:mkae8b1712576f2b8c0b0aa7c1efa37a360ac26e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:40.336687  358231 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.key.4219e30a
	I1101 10:41:40.336708  358231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.crt.4219e30a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1101 10:41:40.533265  358231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.crt.4219e30a ...
	I1101 10:41:40.533289  358231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.crt.4219e30a: {Name:mk4e74e321ddcd55bfa5eb073a39da92364aa686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:40.533430  358231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.key.4219e30a ...
	I1101 10:41:40.533443  358231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.key.4219e30a: {Name:mkb65748d75d4fcdeaa295c010bb7a371d715584 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:40.533526  358231 certs.go:382] copying /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.crt.4219e30a -> /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.crt
	I1101 10:41:40.533602  358231 certs.go:386] copying /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.key.4219e30a -> /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.key
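
Editor's note: at 10:41:40.533 the profile's apiserver certificate is generated with the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] and then copied from its .4219e30a suffix to the final apiserver.crt/apiserver.key names. If the embedded names ever need checking, openssl can print them back; a sketch using the profile path from the log:

openssl x509 -noout -text \
  -in /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.crt \
  | grep -A1 'Subject Alternative Name'
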
	I1101 10:41:40.533668  358231 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/proxy-client.key
	I1101 10:41:40.533684  358231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/proxy-client.crt with IP's: []
	I1101 10:41:40.767321  358231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/proxy-client.crt ...
	I1101 10:41:40.767350  358231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/proxy-client.crt: {Name:mke06acc5d44227f1ca95d7b52d3997c292080a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:40.767531  358231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/proxy-client.key ...
	I1101 10:41:40.767556  358231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/proxy-client.key: {Name:mke3bcb4ec0825128cd3ac9c3633dfad30c22042 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:40.767786  358231 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem (1338 bytes)
	W1101 10:41:40.767824  358231 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522_empty.pem, impossibly tiny 0 bytes
	I1101 10:41:40.767831  358231 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:41:40.767854  358231 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:41:40.767873  358231 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:41:40.767902  358231 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 10:41:40.767935  358231 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:41:40.768654  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:41:40.795384  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:41:40.820642  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:41:40.845653  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:41:40.874007  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 10:41:40.900746  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:41:40.923565  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:41:40.943079  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:41:40.965721  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:41:40.989962  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem --> /usr/share/ca-certificates/61522.pem (1338 bytes)
	I1101 10:41:41.014944  358231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /usr/share/ca-certificates/615222.pem (1708 bytes)
	I1101 10:41:41.049337  358231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:41:41.081843  358231 ssh_runner.go:195] Run: openssl version
	I1101 10:41:41.094135  358231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:41:41.108003  358231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:41:41.113103  358231 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:41:41.113210  358231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:41:41.155599  358231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:41:41.165746  358231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/61522.pem && ln -fs /usr/share/ca-certificates/61522.pem /etc/ssl/certs/61522.pem"
	I1101 10:41:41.174767  358231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/61522.pem
	I1101 10:41:41.178519  358231 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:01 /usr/share/ca-certificates/61522.pem
	I1101 10:41:41.178566  358231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/61522.pem
	I1101 10:41:41.230174  358231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/61522.pem /etc/ssl/certs/51391683.0"
	I1101 10:41:41.243989  358231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/615222.pem && ln -fs /usr/share/ca-certificates/615222.pem /etc/ssl/certs/615222.pem"
	I1101 10:41:41.256218  358231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/615222.pem
	I1101 10:41:41.261456  358231 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:01 /usr/share/ca-certificates/615222.pem
	I1101 10:41:41.261523  358231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/615222.pem
	I1101 10:41:41.316711  358231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/615222.pem /etc/ssl/certs/3ec20f2e.0"
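The openssl/ln sequence above is how the harness publishes each CA certificate under its OpenSSL subject hash so the node's trust store can resolve it. A minimal sketch of the same hash-and-symlink step, run by hand on the node (the path is taken from the log; any CA PEM works the same way):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # same link created by the test run above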
	I1101 10:41:41.328023  358231 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:41:41.332173  358231 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:41:41.332238  358231 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-433711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-433711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:41:41.332326  358231 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:41:41.332380  358231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:41:41.364169  358231 cri.go:89] found id: ""
	I1101 10:41:41.364238  358231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:41:41.372785  358231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:41:41.382321  358231 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 10:41:41.382403  358231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:41:41.390163  358231 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:41:41.390177  358231 kubeadm.go:158] found existing configuration files:
	
	I1101 10:41:41.390216  358231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1101 10:41:41.398231  358231 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:41:41.398273  358231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:41:41.405978  358231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1101 10:41:41.413747  358231 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:41:41.413805  358231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:41:41.421188  358231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1101 10:41:41.429539  358231 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:41:41.429598  358231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:41:41.437891  358231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1101 10:41:41.446246  358231 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:41:41.446377  358231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:41:41.454006  358231 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 10:41:41.493000  358231 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:41:41.493066  358231 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:41:41.528739  358231 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 10:41:41.528863  358231 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 10:41:41.528913  358231 kubeadm.go:319] OS: Linux
	I1101 10:41:41.528997  358231 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 10:41:41.529079  358231 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 10:41:41.529143  358231 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 10:41:41.529231  358231 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 10:41:41.529316  358231 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 10:41:41.529384  358231 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 10:41:41.529453  358231 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 10:41:41.529527  358231 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 10:41:41.602739  358231 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:41:41.602901  358231 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:41:41.603049  358231 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:41:41.610443  358231 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:41:41.613518  358231 out.go:252]   - Generating certificates and keys ...
	I1101 10:41:41.613642  358231 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:41:41.613748  358231 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:41:41.705487  358231 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:41:41.790787  358231 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:41:42.434708  358231 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:41:42.544585  358231 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:41:39.914833  349346 node_ready.go:49] node "embed-certs-071527" is "Ready"
	I1101 10:41:39.914887  349346 node_ready.go:38] duration metric: took 11.006895765s for node "embed-certs-071527" to be "Ready" ...
	I1101 10:41:39.914906  349346 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:41:39.914993  349346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:41:39.931795  349346 api_server.go:72] duration metric: took 11.349823452s to wait for apiserver process to appear ...
	I1101 10:41:39.931826  349346 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:41:39.931850  349346 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:41:39.942694  349346 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 10:41:39.943797  349346 api_server.go:141] control plane version: v1.34.1
	I1101 10:41:39.943829  349346 api_server.go:131] duration metric: took 11.995661ms to wait for apiserver health ...
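The health probe above is a plain HTTPS GET against the apiserver's /healthz endpoint; a healthy control plane answers 200 with the body "ok". A rough shell equivalent (address taken from the log; -k skips TLS verification purely for illustration):

	curl -k https://192.168.103.2:8443/healthz
	# ok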
	I1101 10:41:39.943840  349346 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:41:39.949219  349346 system_pods.go:59] 8 kube-system pods found
	I1101 10:41:39.949258  349346 system_pods.go:61] "coredns-66bc5c9577-c5td8" [8b884210-c20d-49e8-a595-b5d5e54a2362] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:39.949266  349346 system_pods.go:61] "etcd-embed-certs-071527" [d8a6e438-eddd-43f3-9608-3a008687442f] Running
	I1101 10:41:39.949276  349346 system_pods.go:61] "kindnet-m4vzv" [ca8c842c-8f8c-46c9-844e-fa29b8bec68b] Running
	I1101 10:41:39.949283  349346 system_pods.go:61] "kube-apiserver-embed-certs-071527" [bd3db226-4dbc-4d1f-93ad-55ea39ecb425] Running
	I1101 10:41:39.949294  349346 system_pods.go:61] "kube-controller-manager-embed-certs-071527" [badbd218-84da-4a8a-b62d-3b8c2a60e20a] Running
	I1101 10:41:39.949299  349346 system_pods.go:61] "kube-proxy-l5pzc" [0d6bc572-4a6b-44f1-988f-6aa83896b936] Running
	I1101 10:41:39.949304  349346 system_pods.go:61] "kube-scheduler-embed-certs-071527" [44b21383-497b-452f-b64b-1792f143b547] Running
	I1101 10:41:39.949317  349346 system_pods.go:61] "storage-provisioner" [ff05c619-0eb3-487b-91e5-6e63996f8329] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:39.949328  349346 system_pods.go:74] duration metric: took 5.479856ms to wait for pod list to return data ...
	I1101 10:41:39.949341  349346 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:41:39.951942  349346 default_sa.go:45] found service account: "default"
	I1101 10:41:39.951962  349346 default_sa.go:55] duration metric: took 2.614975ms for default service account to be created ...
	I1101 10:41:39.951973  349346 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:41:39.955819  349346 system_pods.go:86] 8 kube-system pods found
	I1101 10:41:39.955889  349346 system_pods.go:89] "coredns-66bc5c9577-c5td8" [8b884210-c20d-49e8-a595-b5d5e54a2362] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:39.955910  349346 system_pods.go:89] "etcd-embed-certs-071527" [d8a6e438-eddd-43f3-9608-3a008687442f] Running
	I1101 10:41:39.955924  349346 system_pods.go:89] "kindnet-m4vzv" [ca8c842c-8f8c-46c9-844e-fa29b8bec68b] Running
	I1101 10:41:39.955937  349346 system_pods.go:89] "kube-apiserver-embed-certs-071527" [bd3db226-4dbc-4d1f-93ad-55ea39ecb425] Running
	I1101 10:41:39.955954  349346 system_pods.go:89] "kube-controller-manager-embed-certs-071527" [badbd218-84da-4a8a-b62d-3b8c2a60e20a] Running
	I1101 10:41:39.955974  349346 system_pods.go:89] "kube-proxy-l5pzc" [0d6bc572-4a6b-44f1-988f-6aa83896b936] Running
	I1101 10:41:39.955990  349346 system_pods.go:89] "kube-scheduler-embed-certs-071527" [44b21383-497b-452f-b64b-1792f143b547] Running
	I1101 10:41:39.956013  349346 system_pods.go:89] "storage-provisioner" [ff05c619-0eb3-487b-91e5-6e63996f8329] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:39.956045  349346 retry.go:31] will retry after 233.095458ms: missing components: kube-dns
	I1101 10:41:40.193136  349346 system_pods.go:86] 8 kube-system pods found
	I1101 10:41:40.193173  349346 system_pods.go:89] "coredns-66bc5c9577-c5td8" [8b884210-c20d-49e8-a595-b5d5e54a2362] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:40.193183  349346 system_pods.go:89] "etcd-embed-certs-071527" [d8a6e438-eddd-43f3-9608-3a008687442f] Running
	I1101 10:41:40.193192  349346 system_pods.go:89] "kindnet-m4vzv" [ca8c842c-8f8c-46c9-844e-fa29b8bec68b] Running
	I1101 10:41:40.193197  349346 system_pods.go:89] "kube-apiserver-embed-certs-071527" [bd3db226-4dbc-4d1f-93ad-55ea39ecb425] Running
	I1101 10:41:40.193203  349346 system_pods.go:89] "kube-controller-manager-embed-certs-071527" [badbd218-84da-4a8a-b62d-3b8c2a60e20a] Running
	I1101 10:41:40.193208  349346 system_pods.go:89] "kube-proxy-l5pzc" [0d6bc572-4a6b-44f1-988f-6aa83896b936] Running
	I1101 10:41:40.193213  349346 system_pods.go:89] "kube-scheduler-embed-certs-071527" [44b21383-497b-452f-b64b-1792f143b547] Running
	I1101 10:41:40.193218  349346 system_pods.go:89] "storage-provisioner" [ff05c619-0eb3-487b-91e5-6e63996f8329] Running
	I1101 10:41:40.193229  349346 system_pods.go:126] duration metric: took 241.247888ms to wait for k8s-apps to be running ...
	I1101 10:41:40.193239  349346 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:41:40.193288  349346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:41:40.208018  349346 system_svc.go:56] duration metric: took 14.767032ms WaitForService to wait for kubelet
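The kubelet check above only asks systemd whether the unit is active; the same check can be repeated by hand against the node (profile name taken from the log):

	minikube -p embed-certs-071527 ssh -- sudo systemctl is-active kubelet
	# active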
	I1101 10:41:40.208049  349346 kubeadm.go:587] duration metric: took 11.626084304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:41:40.208071  349346 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:41:40.211684  349346 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:41:40.211713  349346 node_conditions.go:123] node cpu capacity is 8
	I1101 10:41:40.211730  349346 node_conditions.go:105] duration metric: took 3.653431ms to run NodePressure ...
	I1101 10:41:40.211745  349346 start.go:242] waiting for startup goroutines ...
	I1101 10:41:40.211754  349346 start.go:247] waiting for cluster config update ...
	I1101 10:41:40.211767  349346 start.go:256] writing updated cluster config ...
	I1101 10:41:40.212077  349346 ssh_runner.go:195] Run: rm -f paused
	I1101 10:41:40.216196  349346 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:41:40.220214  349346 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c5td8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:41.227261  349346 pod_ready.go:94] pod "coredns-66bc5c9577-c5td8" is "Ready"
	I1101 10:41:41.227294  349346 pod_ready.go:86] duration metric: took 1.007056303s for pod "coredns-66bc5c9577-c5td8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:41.231542  349346 pod_ready.go:83] waiting for pod "etcd-embed-certs-071527" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:41.239425  349346 pod_ready.go:94] pod "etcd-embed-certs-071527" is "Ready"
	I1101 10:41:41.239464  349346 pod_ready.go:86] duration metric: took 7.861155ms for pod "etcd-embed-certs-071527" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:41.242130  349346 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-071527" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:41.248488  349346 pod_ready.go:94] pod "kube-apiserver-embed-certs-071527" is "Ready"
	I1101 10:41:41.248531  349346 pod_ready.go:86] duration metric: took 6.377772ms for pod "kube-apiserver-embed-certs-071527" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:41.251741  349346 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-071527" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:41.424741  349346 pod_ready.go:94] pod "kube-controller-manager-embed-certs-071527" is "Ready"
	I1101 10:41:41.424771  349346 pod_ready.go:86] duration metric: took 173.001521ms for pod "kube-controller-manager-embed-certs-071527" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:41.624624  349346 pod_ready.go:83] waiting for pod "kube-proxy-l5pzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:42.023763  349346 pod_ready.go:94] pod "kube-proxy-l5pzc" is "Ready"
	I1101 10:41:42.023793  349346 pod_ready.go:86] duration metric: took 399.143044ms for pod "kube-proxy-l5pzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:42.224220  349346 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-071527" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:42.624358  349346 pod_ready.go:94] pod "kube-scheduler-embed-certs-071527" is "Ready"
	I1101 10:41:42.624387  349346 pod_ready.go:86] duration metric: took 400.140672ms for pod "kube-scheduler-embed-certs-071527" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:41:42.624398  349346 pod_ready.go:40] duration metric: took 2.408172334s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:41:42.680778  349346 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:41:42.682148  349346 out.go:179] * Done! kubectl is now configured to use "embed-certs-071527" cluster and "default" namespace by default
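Once the profile reports Done!, kubectl is pointed at a context named after the profile, so a quick sanity check could look like this (context name taken from the log line above; not part of the test run):

	kubectl --context embed-certs-071527 get nodes
	kubectl --context embed-certs-071527 get pods -n kube-system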
	I1101 10:41:41.046254  359640 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:41:41.046316  359640 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:41:41.046428  359640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:41:41.064919  359640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/old-k8s-version-707467/id_rsa Username:docker}
	I1101 10:41:41.075669  359640 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:41:41.075698  359640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:41:41.075761  359640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:41:41.078418  359640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/old-k8s-version-707467/id_rsa Username:docker}
	I1101 10:41:41.105076  359640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/old-k8s-version-707467/id_rsa Username:docker}
	I1101 10:41:41.200899  359640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:41:41.201622  359640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:41:41.202606  359640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:41:41.202629  359640 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:41:41.220956  359640 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:41:41.220977  359640 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:41:41.229037  359640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:41:41.243470  359640 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:41:41.243507  359640 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:41:41.263118  359640 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:41:41.263144  359640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:41:41.282120  359640 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:41:41.282153  359640 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:41:41.306552  359640 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:41:41.306583  359640 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:41:41.329403  359640 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:41:41.329429  359640 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:41:41.343519  359640 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:41:41.343545  359640 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:41:41.357078  359640 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:41:41.357131  359640 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:41:41.371679  359640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:41:44.314754  359640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.113817305s)
	I1101 10:41:44.314809  359640 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.113159456s)
	I1101 10:41:44.314852  359640 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-707467" to be "Ready" ...
	I1101 10:41:44.314869  359640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.085797897s)
	I1101 10:41:44.324421  359640 node_ready.go:49] node "old-k8s-version-707467" is "Ready"
	I1101 10:41:44.324455  359640 node_ready.go:38] duration metric: took 9.581234ms for node "old-k8s-version-707467" to be "Ready" ...
	I1101 10:41:44.324472  359640 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:41:44.324554  359640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:41:44.615485  359640 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.24376065s)
	I1101 10:41:44.615699  359640 api_server.go:72] duration metric: took 3.60730482s to wait for apiserver process to appear ...
	I1101 10:41:44.615725  359640 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:41:44.615747  359640 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 10:41:44.621889  359640 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-707467 addons enable metrics-server
	
	I1101 10:41:44.624071  359640 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
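With the dashboard addon enabled for this profile, the usual way to reach it is through minikube's own proxy; a possible follow-up command, shown only as an example and not executed by the test:

	minikube -p old-k8s-version-707467 dashboard --url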
	I1101 10:41:42.854900  358231 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:41:42.855073  358231 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-433711 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:41:43.306889  358231 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:41:43.307123  358231 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-433711 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1101 10:41:43.752374  358231 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:41:43.877565  358231 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:41:44.277192  358231 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:41:44.277429  358231 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:41:44.343536  358231 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:41:44.575781  358231 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:41:44.821831  358231 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:41:45.066117  358231 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:41:45.240802  358231 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:41:45.241767  358231 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:41:45.246171  358231 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:41:45.247520  358231 out.go:252]   - Booting up control plane ...
	I1101 10:41:45.247626  358231 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:41:45.247716  358231 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:41:45.248542  358231 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:41:45.280307  358231 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:41:45.280472  358231 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:41:45.288726  358231 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:41:45.291019  358231 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:41:45.291106  358231 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:41:45.409251  358231 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:41:45.409418  358231 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:41:46.410722  358231 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001552367s
	I1101 10:41:46.414039  358231 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:41:46.414165  358231 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1101 10:41:46.414286  358231 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:41:46.414411  358231 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:41:44.624071  359640 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1101 10:41:44.624357  359640 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1101 10:41:44.630619  359640 addons.go:515] duration metric: took 3.621934996s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 10:41:45.116186  359640 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 10:41:45.120415  359640 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 10:41:45.122024  359640 api_server.go:141] control plane version: v1.28.0
	I1101 10:41:45.122050  359640 api_server.go:131] duration metric: took 506.318988ms to wait for apiserver health ...
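The earlier 500 responses came from the rbac/bootstrap-roles post-start hook that had not completed yet, and the harness simply re-polled until /healthz returned ok, as seen above. A small polling loop in the same spirit (endpoint from the log; the one-second interval is arbitrary):

	until curl -ksf https://192.168.94.2:8443/healthz >/dev/null; do sleep 1; done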
	I1101 10:41:45.122060  359640 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:41:45.125514  359640 system_pods.go:59] 8 kube-system pods found
	I1101 10:41:45.125555  359640 system_pods.go:61] "coredns-5dd5756b68-9fdk6" [e43bd16e-e22d-4c91-88ec-652fe391b4f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:45.125564  359640 system_pods.go:61] "etcd-old-k8s-version-707467" [ef1fa7c6-d526-427b-bd21-26b22c575da3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:41:45.125573  359640 system_pods.go:61] "kindnet-xxlgz" [cf757ff2-e0ef-43e8-97e9-44b145900bf5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:41:45.125582  359640 system_pods.go:61] "kube-apiserver-old-k8s-version-707467" [fe63902d-d8ba-43e8-b891-f2dca076594c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:41:45.125591  359640 system_pods.go:61] "kube-controller-manager-old-k8s-version-707467" [3c4a04cb-a001-47b9-b78c-271374ec1444] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:41:45.125597  359640 system_pods.go:61] "kube-proxy-2pbws" [f553a3e8-f065-4723-8a39-2fee4a395d45] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:41:45.125602  359640 system_pods.go:61] "kube-scheduler-old-k8s-version-707467" [b42a650a-bec0-46e1-b6ab-c1c33a4adea2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:41:45.125607  359640 system_pods.go:61] "storage-provisioner" [476c3eb5-e771-4963-ac52-b3786e841080] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:45.125616  359640 system_pods.go:74] duration metric: took 3.550233ms to wait for pod list to return data ...
	I1101 10:41:45.125626  359640 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:41:45.127525  359640 default_sa.go:45] found service account: "default"
	I1101 10:41:45.127551  359640 default_sa.go:55] duration metric: took 1.918084ms for default service account to be created ...
	I1101 10:41:45.127562  359640 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:41:45.130743  359640 system_pods.go:86] 8 kube-system pods found
	I1101 10:41:45.130771  359640 system_pods.go:89] "coredns-5dd5756b68-9fdk6" [e43bd16e-e22d-4c91-88ec-652fe391b4f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:45.130782  359640 system_pods.go:89] "etcd-old-k8s-version-707467" [ef1fa7c6-d526-427b-bd21-26b22c575da3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:41:45.130795  359640 system_pods.go:89] "kindnet-xxlgz" [cf757ff2-e0ef-43e8-97e9-44b145900bf5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:41:45.130805  359640 system_pods.go:89] "kube-apiserver-old-k8s-version-707467" [fe63902d-d8ba-43e8-b891-f2dca076594c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:41:45.130817  359640 system_pods.go:89] "kube-controller-manager-old-k8s-version-707467" [3c4a04cb-a001-47b9-b78c-271374ec1444] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:41:45.130828  359640 system_pods.go:89] "kube-proxy-2pbws" [f553a3e8-f065-4723-8a39-2fee4a395d45] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:41:45.130840  359640 system_pods.go:89] "kube-scheduler-old-k8s-version-707467" [b42a650a-bec0-46e1-b6ab-c1c33a4adea2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:41:45.130847  359640 system_pods.go:89] "storage-provisioner" [476c3eb5-e771-4963-ac52-b3786e841080] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:45.130866  359640 system_pods.go:126] duration metric: took 3.297321ms to wait for k8s-apps to be running ...
	I1101 10:41:45.130878  359640 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:41:45.130926  359640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:41:45.144124  359640 system_svc.go:56] duration metric: took 13.236882ms WaitForService to wait for kubelet
	I1101 10:41:45.144151  359640 kubeadm.go:587] duration metric: took 4.135760554s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:41:45.144171  359640 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:41:45.146692  359640 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:41:45.146716  359640 node_conditions.go:123] node cpu capacity is 8
	I1101 10:41:45.146727  359640 node_conditions.go:105] duration metric: took 2.551739ms to run NodePressure ...
	I1101 10:41:45.146738  359640 start.go:242] waiting for startup goroutines ...
	I1101 10:41:45.146744  359640 start.go:247] waiting for cluster config update ...
	I1101 10:41:45.146755  359640 start.go:256] writing updated cluster config ...
	I1101 10:41:45.146997  359640 ssh_runner.go:195] Run: rm -f paused
	I1101 10:41:45.150742  359640 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:41:45.155339  359640 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-9fdk6" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:41:47.160767  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	I1101 10:41:48.350161  358231 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.936089467s
	I1101 10:41:48.669878  358231 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.25579597s
	I1101 10:41:49.915608  358231 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501437821s
	I1101 10:41:49.926155  358231 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:41:49.935625  358231 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:41:49.944368  358231 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:41:49.944693  358231 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-433711 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:41:49.956055  358231 kubeadm.go:319] [bootstrap-token] Using token: qi4g7e.okcwxybxyq25j58d
	I1101 10:41:49.957425  358231 out.go:252]   - Configuring RBAC rules ...
	I1101 10:41:49.957590  358231 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:41:49.961386  358231 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:41:49.966514  358231 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:41:49.968792  358231 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:41:49.971210  358231 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:41:49.973641  358231 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:41:50.322340  358231 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:41:50.739699  358231 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:41:51.320986  358231 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:41:51.322169  358231 kubeadm.go:319] 
	I1101 10:41:51.322274  358231 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:41:51.322283  358231 kubeadm.go:319] 
	I1101 10:41:51.322396  358231 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:41:51.322421  358231 kubeadm.go:319] 
	I1101 10:41:51.322478  358231 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:41:51.322593  358231 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:41:51.322676  358231 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:41:51.322690  358231 kubeadm.go:319] 
	I1101 10:41:51.322769  358231 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:41:51.322783  358231 kubeadm.go:319] 
	I1101 10:41:51.322827  358231 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:41:51.322833  358231 kubeadm.go:319] 
	I1101 10:41:51.322876  358231 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:41:51.322943  358231 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:41:51.323007  358231 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:41:51.323013  358231 kubeadm.go:319] 
	I1101 10:41:51.323088  358231 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:41:51.323165  358231 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:41:51.323174  358231 kubeadm.go:319] 
	I1101 10:41:51.323265  358231 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token qi4g7e.okcwxybxyq25j58d \
	I1101 10:41:51.323360  358231 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:940bb8e1f96ef3c88df818902bd8202f25d19108c9c93fa4896a1f509b4cfb64 \
	I1101 10:41:51.323389  358231 kubeadm.go:319] 	--control-plane 
	I1101 10:41:51.323396  358231 kubeadm.go:319] 
	I1101 10:41:51.323477  358231 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:41:51.323489  358231 kubeadm.go:319] 
	I1101 10:41:51.323596  358231 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token qi4g7e.okcwxybxyq25j58d \
	I1101 10:41:51.323717  358231 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:940bb8e1f96ef3c88df818902bd8202f25d19108c9c93fa4896a1f509b4cfb64 
	I1101 10:41:51.326854  358231 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 10:41:51.327014  358231 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:41:51.327047  358231 cni.go:84] Creating CNI manager for ""
	I1101 10:41:51.327056  358231 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:41:51.329439  358231 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 10:41:51.330576  358231 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 10:41:51.335265  358231 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:41:51.335281  358231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 10:41:51.348843  358231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:41:51.554848  358231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:41:51.554959  358231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:51.554981  358231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-433711 minikube.k8s.io/updated_at=2025_11_01T10_41_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=default-k8s-diff-port-433711 minikube.k8s.io/primary=true
	I1101 10:41:51.565901  358231 ops.go:34] apiserver oom_adj: -16
	I1101 10:41:51.631846  358231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:52.131918  358231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1101 10:41:49.161330  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:41:51.660896  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 01 10:41:39 embed-certs-071527 crio[781]: time="2025-11-01T10:41:39.921974139Z" level=info msg="Starting container: df4d4e640c3b00a2e5979ecd5e024ddab57773924a942433f936eaf3d765338d" id=04da0c8d-7436-4e28-bae2-ef780a1ce704 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:41:39 embed-certs-071527 crio[781]: time="2025-11-01T10:41:39.924295138Z" level=info msg="Started container" PID=1824 containerID=df4d4e640c3b00a2e5979ecd5e024ddab57773924a942433f936eaf3d765338d description=kube-system/coredns-66bc5c9577-c5td8/coredns id=04da0c8d-7436-4e28-bae2-ef780a1ce704 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4d5be258af609144184fd82f2e2d26ac1976f9b46686b33f7fe573d7b58cce5d
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.163740145Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4aa421ae-674c-4818-94b5-67bb18fd37b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.163857652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.169722867Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:71bb8591e16cddb6d192be9e7dab629ba3b165e707921bd1d9d5f188c1eb831e UID:38d217fc-2e74-49ba-9a94-b40059463772 NetNS:/var/run/netns/79d1512f-caca-4526-9155-ba173f17cd9c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003a05e8}] Aliases:map[]}"
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.169754776Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.179829019Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:71bb8591e16cddb6d192be9e7dab629ba3b165e707921bd1d9d5f188c1eb831e UID:38d217fc-2e74-49ba-9a94-b40059463772 NetNS:/var/run/netns/79d1512f-caca-4526-9155-ba173f17cd9c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003a05e8}] Aliases:map[]}"
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.17996091Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.180653287Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.181635391Z" level=info msg="Ran pod sandbox 71bb8591e16cddb6d192be9e7dab629ba3b165e707921bd1d9d5f188c1eb831e with infra container: default/busybox/POD" id=4aa421ae-674c-4818-94b5-67bb18fd37b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.182880159Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e2610587-239c-4cca-977c-5998c1af3cf7 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.183012127Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e2610587-239c-4cca-977c-5998c1af3cf7 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.183066824Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e2610587-239c-4cca-977c-5998c1af3cf7 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.183822566Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=43840697-ad3f-48b8-8f33-f949bb993df5 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:41:43 embed-certs-071527 crio[781]: time="2025-11-01T10:41:43.185946063Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:41:45 embed-certs-071527 crio[781]: time="2025-11-01T10:41:45.337483669Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=43840697-ad3f-48b8-8f33-f949bb993df5 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:41:45 embed-certs-071527 crio[781]: time="2025-11-01T10:41:45.338294621Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cec0914d-5c1b-41f7-a8b5-8f976ce9d836 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:45 embed-certs-071527 crio[781]: time="2025-11-01T10:41:45.339605388Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a7a643db-1e45-4b3a-964b-56b71bf6ce52 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:41:45 embed-certs-071527 crio[781]: time="2025-11-01T10:41:45.343135871Z" level=info msg="Creating container: default/busybox/busybox" id=4092c252-03f3-4a11-8cd1-4f722d2a99b3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:41:45 embed-certs-071527 crio[781]: time="2025-11-01T10:41:45.3432833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:41:45 embed-certs-071527 crio[781]: time="2025-11-01T10:41:45.347593419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:41:45 embed-certs-071527 crio[781]: time="2025-11-01T10:41:45.348003888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:41:45 embed-certs-071527 crio[781]: time="2025-11-01T10:41:45.379534987Z" level=info msg="Created container 6ed5bc06b29e1ed8b81d447895910b8ef6c8f1e3eab70d43cb0c5883c7209c07: default/busybox/busybox" id=4092c252-03f3-4a11-8cd1-4f722d2a99b3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:41:45 embed-certs-071527 crio[781]: time="2025-11-01T10:41:45.380183858Z" level=info msg="Starting container: 6ed5bc06b29e1ed8b81d447895910b8ef6c8f1e3eab70d43cb0c5883c7209c07" id=5521ebeb-4203-4723-91d3-a2f54c461be4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:41:45 embed-certs-071527 crio[781]: time="2025-11-01T10:41:45.381991687Z" level=info msg="Started container" PID=1900 containerID=6ed5bc06b29e1ed8b81d447895910b8ef6c8f1e3eab70d43cb0c5883c7209c07 description=default/busybox/busybox id=5521ebeb-4203-4723-91d3-a2f54c461be4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71bb8591e16cddb6d192be9e7dab629ba3b165e707921bd1d9d5f188c1eb831e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	6ed5bc06b29e1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   71bb8591e16cd       busybox                                      default
	df4d4e640c3b0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 seconds ago      Running             coredns                   0                   4d5be258af609       coredns-66bc5c9577-c5td8                     kube-system
	39fb24c2491a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   82d30dca4c66b       storage-provisioner                          kube-system
	5cdc2e78dfbf4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   617e45ff66efe       kube-proxy-l5pzc                             kube-system
	b28d674067930       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   d95b4a92f707e       kindnet-m4vzv                                kube-system
	de017d0b5e873       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   99e01f423f5c1       kube-apiserver-embed-certs-071527            kube-system
	60dc9607c080e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   30f98e9bf061d       kube-controller-manager-embed-certs-071527   kube-system
	430627c037585       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   fa8d05cb8b924       etcd-embed-certs-071527                      kube-system
	acdd06d411181       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   66088fda35116       kube-scheduler-embed-certs-071527            kube-system
	
	
	==> coredns [df4d4e640c3b00a2e5979ecd5e024ddab57773924a942433f936eaf3d765338d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45173 - 45384 "HINFO IN 2574896356004953888.7404491889795186743. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.034432705s
	
	
	==> describe nodes <==
	Name:               embed-certs-071527
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-071527
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=embed-certs-071527
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_41_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:41:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-071527
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:41:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:41:53 +0000   Sat, 01 Nov 2025 10:41:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:41:53 +0000   Sat, 01 Nov 2025 10:41:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:41:53 +0000   Sat, 01 Nov 2025 10:41:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:41:53 +0000   Sat, 01 Nov 2025 10:41:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-071527
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                0f044f7b-0834-4e21-aea6-e7dd72693606
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-c5td8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-071527                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-m4vzv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-071527             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-071527    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-l5pzc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-071527             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node embed-certs-071527 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node embed-certs-071527 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node embed-certs-071527 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node embed-certs-071527 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node embed-certs-071527 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node embed-certs-071527 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node embed-certs-071527 event: Registered Node embed-certs-071527 in Controller
	  Normal  NodeReady                15s                kubelet          Node embed-certs-071527 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[Nov 1 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	
	
	==> etcd [430627c037585833dbe8a1cae20a64b458f572dc4cf0c926eb42ead95ebea761] <==
	{"level":"warn","ts":"2025-11-01T10:41:19.881017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:19.888634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:19.897426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:19.905370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:19.915853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:19.925142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:19.934039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:19.942540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:19.952648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:19.962854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:19.971190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:19.988062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:19.995282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:20.003743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:20.011163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:20.019608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:20.026748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:20.035241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:20.043015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:20.050931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:20.059337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:20.067199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:20.087096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:20.093603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:20.152130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34696","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:41:54 up  2:24,  0 user,  load average: 4.63, 3.77, 2.41
	Linux embed-certs-071527 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b28d674067930e5b6b57f5b438df0f1b6d984a6ecfb3cd5a0671156591673199] <==
	I1101 10:41:29.056300       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:41:29.056588       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 10:41:29.056779       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:41:29.056803       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:41:29.056831       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:41:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:41:29.280370       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:41:29.280434       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:41:29.280449       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:41:29.280621       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:41:29.653536       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:41:29.653576       1 metrics.go:72] Registering metrics
	I1101 10:41:29.653833       1 controller.go:711] "Syncing nftables rules"
	I1101 10:41:39.283599       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:41:39.283669       1 main.go:301] handling current node
	I1101 10:41:49.281313       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:41:49.281344       1 main.go:301] handling current node
	
	
	==> kube-apiserver [de017d0b5e8739d4b791c1ae357f68ab4c3a8b00cd85b5aead627ede35b49bf2] <==
	I1101 10:41:20.662165       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	E1101 10:41:20.662205       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1101 10:41:20.662117       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:41:20.667517       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:41:20.674343       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:41:20.683817       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:41:20.865642       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:41:21.564792       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:41:21.568973       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:41:21.568994       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:41:22.087300       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:41:22.131115       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:41:22.268921       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:41:22.275026       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1101 10:41:22.276238       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:41:22.281553       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:41:22.591998       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:41:23.066738       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:41:23.077405       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:41:23.085679       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:41:28.293234       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 10:41:28.343881       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:41:28.445958       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:41:28.450859       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1101 10:41:52.946706       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:58830: use of closed network connection
	
	
	==> kube-controller-manager [60dc9607c080e236f49ba048f12a0900c16b455e9315c85b8b6911d9c9390d88] <==
	I1101 10:41:27.590951       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:41:27.591043       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:41:27.591155       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:41:27.591155       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:41:27.591222       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:41:27.592106       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:41:27.592120       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:41:27.592148       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:41:27.592220       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:41:27.592246       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:41:27.592399       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:41:27.592975       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:41:27.593543       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:41:27.593839       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:41:27.594739       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:41:27.595866       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:41:27.596936       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:41:27.601168       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:41:27.606379       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:41:27.613746       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:41:27.629527       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:41:27.641437       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:41:27.641462       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:41:27.641471       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:41:42.593124       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5cdc2e78dfbf442621882ae48d6b29c5fb70637cad940439c216f15ce8ebc129] <==
	I1101 10:41:28.873878       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:41:28.955719       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:41:29.056490       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:41:29.056597       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1101 10:41:29.056730       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:41:29.079925       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:41:29.079990       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:41:29.086226       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:41:29.086693       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:41:29.086733       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:41:29.088251       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:41:29.088335       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:41:29.088397       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:41:29.088643       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:41:29.088356       1 config.go:309] "Starting node config controller"
	I1101 10:41:29.088823       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:41:29.088843       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:41:29.088281       1 config.go:200] "Starting service config controller"
	I1101 10:41:29.088857       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:41:29.189054       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:41:29.189161       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:41:29.189174       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [acdd06d411181ce68a577422cc7948c3c5454c926f6eba85ebec98e859e59529] <==
	E1101 10:41:20.618332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:41:20.618604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:41:20.618955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:41:20.619099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:41:20.619114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:41:20.619239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:41:20.619315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:41:20.619417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:41:20.619432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:41:20.619509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:41:20.619592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:41:20.619843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:41:20.619902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:41:21.426236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:41:21.438718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:41:21.449205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:41:21.590952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:41:21.604124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:41:21.660798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:41:21.706110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:41:21.709271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:41:21.719933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:41:21.723574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:41:21.881306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 10:41:23.716075       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:41:23 embed-certs-071527 kubelet[1296]: I1101 10:41:23.956224    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-071527" podStartSLOduration=0.956201192 podStartE2EDuration="956.201192ms" podCreationTimestamp="2025-11-01 10:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:23.95610833 +0000 UTC m=+1.133778464" watchObservedRunningTime="2025-11-01 10:41:23.956201192 +0000 UTC m=+1.133871325"
	Nov 01 10:41:23 embed-certs-071527 kubelet[1296]: I1101 10:41:23.975088    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-071527" podStartSLOduration=0.975067409 podStartE2EDuration="975.067409ms" podCreationTimestamp="2025-11-01 10:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:23.96557616 +0000 UTC m=+1.143246296" watchObservedRunningTime="2025-11-01 10:41:23.975067409 +0000 UTC m=+1.152737544"
	Nov 01 10:41:23 embed-certs-071527 kubelet[1296]: I1101 10:41:23.975298    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-071527" podStartSLOduration=0.975268891 podStartE2EDuration="975.268891ms" podCreationTimestamp="2025-11-01 10:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:23.97501847 +0000 UTC m=+1.152688606" watchObservedRunningTime="2025-11-01 10:41:23.975268891 +0000 UTC m=+1.152939026"
	Nov 01 10:41:24 embed-certs-071527 kubelet[1296]: I1101 10:41:24.001737    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-071527" podStartSLOduration=1.001715499 podStartE2EDuration="1.001715499s" podCreationTimestamp="2025-11-01 10:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:23.986520792 +0000 UTC m=+1.164190918" watchObservedRunningTime="2025-11-01 10:41:24.001715499 +0000 UTC m=+1.179385634"
	Nov 01 10:41:27 embed-certs-071527 kubelet[1296]: I1101 10:41:27.561684    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:41:27 embed-certs-071527 kubelet[1296]: I1101 10:41:27.562448    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:41:28 embed-certs-071527 kubelet[1296]: I1101 10:41:28.337786    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0d6bc572-4a6b-44f1-988f-6aa83896b936-kube-proxy\") pod \"kube-proxy-l5pzc\" (UID: \"0d6bc572-4a6b-44f1-988f-6aa83896b936\") " pod="kube-system/kube-proxy-l5pzc"
	Nov 01 10:41:28 embed-certs-071527 kubelet[1296]: I1101 10:41:28.337841    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca8c842c-8f8c-46c9-844e-fa29b8bec68b-xtables-lock\") pod \"kindnet-m4vzv\" (UID: \"ca8c842c-8f8c-46c9-844e-fa29b8bec68b\") " pod="kube-system/kindnet-m4vzv"
	Nov 01 10:41:28 embed-certs-071527 kubelet[1296]: I1101 10:41:28.337875    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8rdf\" (UniqueName: \"kubernetes.io/projected/ca8c842c-8f8c-46c9-844e-fa29b8bec68b-kube-api-access-d8rdf\") pod \"kindnet-m4vzv\" (UID: \"ca8c842c-8f8c-46c9-844e-fa29b8bec68b\") " pod="kube-system/kindnet-m4vzv"
	Nov 01 10:41:28 embed-certs-071527 kubelet[1296]: I1101 10:41:28.337922    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d6bc572-4a6b-44f1-988f-6aa83896b936-xtables-lock\") pod \"kube-proxy-l5pzc\" (UID: \"0d6bc572-4a6b-44f1-988f-6aa83896b936\") " pod="kube-system/kube-proxy-l5pzc"
	Nov 01 10:41:28 embed-certs-071527 kubelet[1296]: I1101 10:41:28.337973    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d6bc572-4a6b-44f1-988f-6aa83896b936-lib-modules\") pod \"kube-proxy-l5pzc\" (UID: \"0d6bc572-4a6b-44f1-988f-6aa83896b936\") " pod="kube-system/kube-proxy-l5pzc"
	Nov 01 10:41:28 embed-certs-071527 kubelet[1296]: I1101 10:41:28.338000    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjn7k\" (UniqueName: \"kubernetes.io/projected/0d6bc572-4a6b-44f1-988f-6aa83896b936-kube-api-access-qjn7k\") pod \"kube-proxy-l5pzc\" (UID: \"0d6bc572-4a6b-44f1-988f-6aa83896b936\") " pod="kube-system/kube-proxy-l5pzc"
	Nov 01 10:41:28 embed-certs-071527 kubelet[1296]: I1101 10:41:28.338025    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ca8c842c-8f8c-46c9-844e-fa29b8bec68b-cni-cfg\") pod \"kindnet-m4vzv\" (UID: \"ca8c842c-8f8c-46c9-844e-fa29b8bec68b\") " pod="kube-system/kindnet-m4vzv"
	Nov 01 10:41:28 embed-certs-071527 kubelet[1296]: I1101 10:41:28.338097    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca8c842c-8f8c-46c9-844e-fa29b8bec68b-lib-modules\") pod \"kindnet-m4vzv\" (UID: \"ca8c842c-8f8c-46c9-844e-fa29b8bec68b\") " pod="kube-system/kindnet-m4vzv"
	Nov 01 10:41:28 embed-certs-071527 kubelet[1296]: I1101 10:41:28.980753    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l5pzc" podStartSLOduration=0.980733779 podStartE2EDuration="980.733779ms" podCreationTimestamp="2025-11-01 10:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:28.980625722 +0000 UTC m=+6.158295857" watchObservedRunningTime="2025-11-01 10:41:28.980733779 +0000 UTC m=+6.158403915"
	Nov 01 10:41:28 embed-certs-071527 kubelet[1296]: I1101 10:41:28.980875    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-m4vzv" podStartSLOduration=0.980869048 podStartE2EDuration="980.869048ms" podCreationTimestamp="2025-11-01 10:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:28.969661544 +0000 UTC m=+6.147331694" watchObservedRunningTime="2025-11-01 10:41:28.980869048 +0000 UTC m=+6.158539184"
	Nov 01 10:41:39 embed-certs-071527 kubelet[1296]: I1101 10:41:39.527693    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:41:39 embed-certs-071527 kubelet[1296]: I1101 10:41:39.618861    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ff05c619-0eb3-487b-91e5-6e63996f8329-tmp\") pod \"storage-provisioner\" (UID: \"ff05c619-0eb3-487b-91e5-6e63996f8329\") " pod="kube-system/storage-provisioner"
	Nov 01 10:41:39 embed-certs-071527 kubelet[1296]: I1101 10:41:39.618964    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2x2l\" (UniqueName: \"kubernetes.io/projected/8b884210-c20d-49e8-a595-b5d5e54a2362-kube-api-access-q2x2l\") pod \"coredns-66bc5c9577-c5td8\" (UID: \"8b884210-c20d-49e8-a595-b5d5e54a2362\") " pod="kube-system/coredns-66bc5c9577-c5td8"
	Nov 01 10:41:39 embed-certs-071527 kubelet[1296]: I1101 10:41:39.619038    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzrp8\" (UniqueName: \"kubernetes.io/projected/ff05c619-0eb3-487b-91e5-6e63996f8329-kube-api-access-bzrp8\") pod \"storage-provisioner\" (UID: \"ff05c619-0eb3-487b-91e5-6e63996f8329\") " pod="kube-system/storage-provisioner"
	Nov 01 10:41:39 embed-certs-071527 kubelet[1296]: I1101 10:41:39.619065    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b884210-c20d-49e8-a595-b5d5e54a2362-config-volume\") pod \"coredns-66bc5c9577-c5td8\" (UID: \"8b884210-c20d-49e8-a595-b5d5e54a2362\") " pod="kube-system/coredns-66bc5c9577-c5td8"
	Nov 01 10:41:39 embed-certs-071527 kubelet[1296]: I1101 10:41:39.987823    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.987801274 podStartE2EDuration="10.987801274s" podCreationTimestamp="2025-11-01 10:41:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:39.987618131 +0000 UTC m=+17.165288267" watchObservedRunningTime="2025-11-01 10:41:39.987801274 +0000 UTC m=+17.165471409"
	Nov 01 10:41:40 embed-certs-071527 kubelet[1296]: I1101 10:41:40.000585    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-c5td8" podStartSLOduration=12.000563693 podStartE2EDuration="12.000563693s" podCreationTimestamp="2025-11-01 10:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:39.999406182 +0000 UTC m=+17.177076317" watchObservedRunningTime="2025-11-01 10:41:40.000563693 +0000 UTC m=+17.178233828"
	Nov 01 10:41:42 embed-certs-071527 kubelet[1296]: I1101 10:41:42.942071    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vbxl\" (UniqueName: \"kubernetes.io/projected/38d217fc-2e74-49ba-9a94-b40059463772-kube-api-access-2vbxl\") pod \"busybox\" (UID: \"38d217fc-2e74-49ba-9a94-b40059463772\") " pod="default/busybox"
	Nov 01 10:41:46 embed-certs-071527 kubelet[1296]: I1101 10:41:46.005650    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.849907967 podStartE2EDuration="4.005625186s" podCreationTimestamp="2025-11-01 10:41:42 +0000 UTC" firstStartedPulling="2025-11-01 10:41:43.183385456 +0000 UTC m=+20.361055582" lastFinishedPulling="2025-11-01 10:41:45.339102671 +0000 UTC m=+22.516772801" observedRunningTime="2025-11-01 10:41:46.005406136 +0000 UTC m=+23.183076272" watchObservedRunningTime="2025-11-01 10:41:46.005625186 +0000 UTC m=+23.183295320"
	
	
	==> storage-provisioner [39fb24c2491a02d2217c00e1d77394026ba6415336b473ceba698d0b12f6c8ac] <==
	I1101 10:41:39.938340       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:41:39.949875       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:41:39.949995       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:41:39.952453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:39.959090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:41:39.959332       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:41:39.959414       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69b6d152-a957-4062-98ba-dd505cbb377c", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-071527_a5ed6387-f34f-4b58-b2cf-3a1c46a469fb became leader
	I1101 10:41:39.959480       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-071527_a5ed6387-f34f-4b58-b2cf-3a1c46a469fb!
	W1101 10:41:39.962461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:39.968285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:41:40.059797       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-071527_a5ed6387-f34f-4b58-b2cf-3a1c46a469fb!
	W1101 10:41:41.971988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:41.977423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:43.981145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:43.985225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:45.990007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:45.995646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:47.999602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:48.005035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:50.008553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:50.012085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:52.015571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:52.019395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:54.023423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:41:54.027747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-071527 -n embed-certs-071527
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-071527 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.11s)
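Note: while the embed-certs-071527 profile is still running, the post-mortem checks above can be replayed by hand. A minimal sketch follows; the metrics-server addon name is purely illustrative, since this log does not record which addon EnableAddonWhileActive toggles:

	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-071527
	kubectl --context embed-certs-071527 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	out/minikube-linux-amd64 -p embed-certs-071527 addons enable metrics-server --alsologtostderr -v=1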

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (5.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-753486 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-753486 --alsologtostderr -v=1: exit status 80 (1.737800143s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-753486 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:42:35.659464  371612 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:42:35.659761  371612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:35.659772  371612 out.go:374] Setting ErrFile to fd 2...
	I1101 10:42:35.659776  371612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:35.659969  371612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:42:35.660188  371612 out.go:368] Setting JSON to false
	I1101 10:42:35.660227  371612 mustload.go:66] Loading cluster: no-preload-753486
	I1101 10:42:35.660596  371612 config.go:182] Loaded profile config "no-preload-753486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:35.661029  371612 cli_runner.go:164] Run: docker container inspect no-preload-753486 --format={{.State.Status}}
	I1101 10:42:35.678765  371612 host.go:66] Checking if "no-preload-753486" exists ...
	I1101 10:42:35.679014  371612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:35.738640  371612 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-01 10:42:35.728081901 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:42:35.739251  371612 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-753486 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:42:35.740749  371612 out.go:179] * Pausing node no-preload-753486 ... 
	I1101 10:42:35.741862  371612 host.go:66] Checking if "no-preload-753486" exists ...
	I1101 10:42:35.742134  371612 ssh_runner.go:195] Run: systemctl --version
	I1101 10:42:35.742183  371612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-753486
	I1101 10:42:35.759733  371612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/no-preload-753486/id_rsa Username:docker}
	I1101 10:42:35.857209  371612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:35.869685  371612 pause.go:52] kubelet running: true
	I1101 10:42:35.869753  371612 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:42:36.036478  371612 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:42:36.036574  371612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:42:36.101809  371612 cri.go:89] found id: "76c50ac8eb959c217b4973a9f7c453efe0965e09cddd683d81715e60193be21c"
	I1101 10:42:36.101832  371612 cri.go:89] found id: "0586fe11d7a4a071268a355a4ef698fd26332adf54529b8328586db399b5fffd"
	I1101 10:42:36.101836  371612 cri.go:89] found id: "74bcc4b75d2f266c94d273e33d106a067fc360c72037464209751efb3f223507"
	I1101 10:42:36.101839  371612 cri.go:89] found id: "59db1705ea900c5162380c88e4070278fb31d77328f589bfb883ac36648a8ddd"
	I1101 10:42:36.101842  371612 cri.go:89] found id: "171c4eb221865cdd52d74e6f620b831af6e5394cf091a1eb5a93396a818ccd67"
	I1101 10:42:36.101847  371612 cri.go:89] found id: "84b6025b4eb5c817b09731471e31eff341dd6e3ddafe2af270b933c44ec0b51e"
	I1101 10:42:36.101851  371612 cri.go:89] found id: "fb6589d637b145d192fbf2e4239b9fbb2482d88501af07f042fb1dd618dc43f5"
	I1101 10:42:36.101854  371612 cri.go:89] found id: "6a2b42f9da1f25e14c5acbd289b7642c05c2c582183b501acc14800027b8bcd7"
	I1101 10:42:36.101859  371612 cri.go:89] found id: "d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2"
	I1101 10:42:36.101887  371612 cri.go:89] found id: "4f7af923324210855ae4649244938c5eb1bd3a5f07cf8a7189ce4721a8fd57be"
	I1101 10:42:36.101896  371612 cri.go:89] found id: ""
	I1101 10:42:36.101939  371612 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:42:36.114049  371612 retry.go:31] will retry after 278.201371ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:36Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:42:36.392573  371612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:36.405814  371612 pause.go:52] kubelet running: false
	I1101 10:42:36.405864  371612 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:42:36.549872  371612 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:42:36.549971  371612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:42:36.618036  371612 cri.go:89] found id: "76c50ac8eb959c217b4973a9f7c453efe0965e09cddd683d81715e60193be21c"
	I1101 10:42:36.618057  371612 cri.go:89] found id: "0586fe11d7a4a071268a355a4ef698fd26332adf54529b8328586db399b5fffd"
	I1101 10:42:36.618061  371612 cri.go:89] found id: "74bcc4b75d2f266c94d273e33d106a067fc360c72037464209751efb3f223507"
	I1101 10:42:36.618064  371612 cri.go:89] found id: "59db1705ea900c5162380c88e4070278fb31d77328f589bfb883ac36648a8ddd"
	I1101 10:42:36.618066  371612 cri.go:89] found id: "171c4eb221865cdd52d74e6f620b831af6e5394cf091a1eb5a93396a818ccd67"
	I1101 10:42:36.618072  371612 cri.go:89] found id: "84b6025b4eb5c817b09731471e31eff341dd6e3ddafe2af270b933c44ec0b51e"
	I1101 10:42:36.618074  371612 cri.go:89] found id: "fb6589d637b145d192fbf2e4239b9fbb2482d88501af07f042fb1dd618dc43f5"
	I1101 10:42:36.618076  371612 cri.go:89] found id: "6a2b42f9da1f25e14c5acbd289b7642c05c2c582183b501acc14800027b8bcd7"
	I1101 10:42:36.618079  371612 cri.go:89] found id: "d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2"
	I1101 10:42:36.618089  371612 cri.go:89] found id: "4f7af923324210855ae4649244938c5eb1bd3a5f07cf8a7189ce4721a8fd57be"
	I1101 10:42:36.618092  371612 cri.go:89] found id: ""
	I1101 10:42:36.618131  371612 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:42:36.630169  371612 retry.go:31] will retry after 438.186924ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:36Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:42:37.068636  371612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:37.083305  371612 pause.go:52] kubelet running: false
	I1101 10:42:37.083364  371612 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:42:37.237469  371612 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:42:37.237578  371612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:42:37.315290  371612 cri.go:89] found id: "76c50ac8eb959c217b4973a9f7c453efe0965e09cddd683d81715e60193be21c"
	I1101 10:42:37.315315  371612 cri.go:89] found id: "0586fe11d7a4a071268a355a4ef698fd26332adf54529b8328586db399b5fffd"
	I1101 10:42:37.315320  371612 cri.go:89] found id: "74bcc4b75d2f266c94d273e33d106a067fc360c72037464209751efb3f223507"
	I1101 10:42:37.315326  371612 cri.go:89] found id: "59db1705ea900c5162380c88e4070278fb31d77328f589bfb883ac36648a8ddd"
	I1101 10:42:37.315330  371612 cri.go:89] found id: "171c4eb221865cdd52d74e6f620b831af6e5394cf091a1eb5a93396a818ccd67"
	I1101 10:42:37.315335  371612 cri.go:89] found id: "84b6025b4eb5c817b09731471e31eff341dd6e3ddafe2af270b933c44ec0b51e"
	I1101 10:42:37.315340  371612 cri.go:89] found id: "fb6589d637b145d192fbf2e4239b9fbb2482d88501af07f042fb1dd618dc43f5"
	I1101 10:42:37.315344  371612 cri.go:89] found id: "6a2b42f9da1f25e14c5acbd289b7642c05c2c582183b501acc14800027b8bcd7"
	I1101 10:42:37.315347  371612 cri.go:89] found id: "d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2"
	I1101 10:42:37.315356  371612 cri.go:89] found id: "4f7af923324210855ae4649244938c5eb1bd3a5f07cf8a7189ce4721a8fd57be"
	I1101 10:42:37.315360  371612 cri.go:89] found id: ""
	I1101 10:42:37.315404  371612 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:42:37.330595  371612 out.go:203] 
	W1101 10:42:37.331905  371612 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:42:37.331931  371612 out.go:285] * 
	* 
	W1101 10:42:37.336567  371612 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:42:37.337594  371612 out.go:203] 

                                                
                                                
** /stderr **
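The stderr above shows the sequence `minikube pause` walks through on the node: check whether kubelet is active, run `systemctl disable --now kubelet`, list running CRI containers for the kube-system/kubernetes-dashboard/istio-operator namespaces via crictl, then run `sudo runc list -f json`, which keeps failing here because /run/runc does not exist on this crio node. Below is a minimal Go sketch of that check-and-retry loop; it is not minikube's actual implementation. The command strings are copied from the log, while the backoff values and error handling are simplified.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// run executes a command and folds its combined output into the error on failure.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
		}
		return nil
	}

	func main() {
		// Kubelet handling, as in pause.go:52 / ssh_runner.go:195 above.
		if exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil {
			_ = run("sudo", "systemctl", "disable", "--now", "kubelet")
		}

		// Container listing for the paused namespaces, as in cri.go:54 above.
		_ = run("sudo", "-s", "eval",
			"crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system")

		// runc state listing with a short backoff, as in retry.go:31 above; on this
		// node it keeps failing with "open /run/runc: no such file or directory".
		var err error
		for attempt, delay := 0, 300*time.Millisecond; attempt < 3; attempt, delay = attempt+1, delay*2 {
			if err = run("sudo", "runc", "list", "-f", "json"); err == nil {
				return
			}
			time.Sleep(delay)
		}
		// Once the retries are exhausted, minikube surfaces GUEST_PAUSE (exit status 80).
		fmt.Println("Pause: list running:", err)
	}
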
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-753486 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-753486
helpers_test.go:243: (dbg) docker inspect no-preload-753486:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83",
	        "Created": "2025-11-01T10:40:35.467852575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 365573,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:41:55.973276216Z",
	            "FinishedAt": "2025-11-01T10:41:55.066086765Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83/hostname",
	        "HostsPath": "/var/lib/docker/containers/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83/hosts",
	        "LogPath": "/var/lib/docker/containers/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83-json.log",
	        "Name": "/no-preload-753486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-753486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-753486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83",
	                "LowerDir": "/var/lib/docker/overlay2/cc0dcd6cf1b9bf2cb25a93b0871481cd4ef5d19c0441af5087e2777000b75593-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc0dcd6cf1b9bf2cb25a93b0871481cd4ef5d19c0441af5087e2777000b75593/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc0dcd6cf1b9bf2cb25a93b0871481cd4ef5d19c0441af5087e2777000b75593/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc0dcd6cf1b9bf2cb25a93b0871481cd4ef5d19c0441af5087e2777000b75593/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-753486",
	                "Source": "/var/lib/docker/volumes/no-preload-753486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-753486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-753486",
	                "name.minikube.sigs.k8s.io": "no-preload-753486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e0eca2946438ba199907cf65fc35b6eb0c4096749251682ffa5c1b919d5ee09",
	            "SandboxKey": "/var/run/docker/netns/0e0eca294643",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-753486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:ea:40:52:7e:df",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d84c48ff1a5729254be6ec17799a5aeb1a98c07f8517c94be1c2de332505338",
	                    "EndpointID": "c5d8112f7969734548aa7939a667019f6234c08e152a38ca6fa2d515c852c079",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-753486",
	                        "6be5ddfae7c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
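The post-mortem dumps the full `docker inspect` for the node container, but the piece the harness actually relies on is the 22/tcp host port under NetworkSettings.Ports (33113 above), read with the Go template shown in the earlier cli_runner line. Below is a minimal sketch of that lookup, reusing the container name and template verbatim from the log; it is illustrative only.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template as the cli_runner invocation above; it indexes the
		// NetworkSettings.Ports map and takes the first binding for 22/tcp.
		const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "no-preload-753486").Output()
		if err != nil {
			panic(err)
		}
		// For the inspect output above this prints "33113".
		fmt.Println(strings.TrimSpace(string(out)))
	}
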
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753486 -n no-preload-753486
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753486 -n no-preload-753486: exit status 2 (352.714838ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
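A minimal sketch of this host-status probe, using the same binary path, profile, and `--format={{.Host}}` template as the helper above; the exit code is reported separately since, as the helper notes, a non-zero status here may be ok.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "no-preload-753486", "-n", "no-preload-753486")
		out, err := cmd.Output()
		fmt.Println("host:", strings.TrimSpace(string(out))) // "Running" in the report above

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit status:", exitErr.ExitCode()) // 2 here; the harness treats this as "may be ok"
		}
	}
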
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-753486 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-753486 logs -n 25: (1.11635934s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo crio config                                                                                                                                                                                                     │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p custom-flannel-299863                                                                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p disable-driver-mounts-339061                                                                                                                                                                                                               │ disable-driver-mounts-339061 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-707467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p no-preload-753486 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-071527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p embed-certs-071527 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p no-preload-753486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p no-preload-753486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-071527 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ no-preload-753486 image list --format=json                                                                                                                                                                                                    │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p no-preload-753486 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ old-k8s-version-707467 image list --format=json                                                                                                                                                                                               │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p old-k8s-version-707467 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:42:12
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:42:12.470667  368496 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:42:12.470926  368496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:12.470935  368496 out.go:374] Setting ErrFile to fd 2...
	I1101 10:42:12.470939  368496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:12.471197  368496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:42:12.471724  368496 out.go:368] Setting JSON to false
	I1101 10:42:12.473007  368496 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8672,"bootTime":1761985060,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:42:12.473093  368496 start.go:143] virtualization: kvm guest
	I1101 10:42:12.475052  368496 out.go:179] * [embed-certs-071527] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:42:12.476242  368496 notify.go:221] Checking for updates...
	I1101 10:42:12.476265  368496 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:42:12.477618  368496 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:42:12.479253  368496 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:12.480396  368496 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:42:12.481804  368496 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:42:12.482907  368496 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:42:12.484696  368496 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:12.485407  368496 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:42:12.510178  368496 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:42:12.510319  368496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:12.566440  368496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:42:12.556585444 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:42:12.566630  368496 docker.go:319] overlay module found
	I1101 10:42:12.568236  368496 out.go:179] * Using the docker driver based on existing profile
	W1101 10:42:08.135661  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:10.135866  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:12.136114  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	I1101 10:42:12.569580  368496 start.go:309] selected driver: docker
	I1101 10:42:12.569598  368496 start.go:930] validating driver "docker" against &{Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:12.569703  368496 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:42:12.570360  368496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:12.629103  368496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:42:12.61946754 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:42:12.629435  368496 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:42:12.629475  368496 cni.go:84] Creating CNI manager for ""
	I1101 10:42:12.629562  368496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:12.629618  368496 start.go:353] cluster config:
	{Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:12.631965  368496 out.go:179] * Starting "embed-certs-071527" primary control-plane node in "embed-certs-071527" cluster
	I1101 10:42:12.633029  368496 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:42:12.634067  368496 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:42:12.635049  368496 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:12.635095  368496 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:42:12.635108  368496 cache.go:59] Caching tarball of preloaded images
	I1101 10:42:12.635157  368496 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:42:12.635206  368496 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:42:12.635218  368496 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:42:12.635307  368496 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/config.json ...
	I1101 10:42:12.655932  368496 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:42:12.655974  368496 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:42:12.655999  368496 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:42:12.656030  368496 start.go:360] acquireMachinesLock for embed-certs-071527: {Name:mk6e96a90f486564e010d9ea6bfd4c480f872098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:42:12.656092  368496 start.go:364] duration metric: took 43.15µs to acquireMachinesLock for "embed-certs-071527"
	I1101 10:42:12.656114  368496 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:42:12.656125  368496 fix.go:54] fixHost starting: 
	I1101 10:42:12.656377  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:12.675012  368496 fix.go:112] recreateIfNeeded on embed-certs-071527: state=Stopped err=<nil>
	W1101 10:42:12.675043  368496 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:42:09.661111  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:11.661382  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:12.483873  365377 pod_ready.go:104] pod "coredns-66bc5c9577-6zph7" is not "Ready", error: node "no-preload-753486" hosting pod "coredns-66bc5c9577-6zph7" is not "Ready" (will retry)
	W1101 10:42:14.484054  365377 pod_ready.go:104] pod "coredns-66bc5c9577-6zph7" is not "Ready", error: node "no-preload-753486" hosting pod "coredns-66bc5c9577-6zph7" is not "Ready" (will retry)
	I1101 10:42:12.676748  368496 out.go:252] * Restarting existing docker container for "embed-certs-071527" ...
	I1101 10:42:12.676817  368496 cli_runner.go:164] Run: docker start embed-certs-071527
	I1101 10:42:12.931557  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:12.950645  368496 kic.go:430] container "embed-certs-071527" state is running.
	I1101 10:42:12.951070  368496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:42:12.969851  368496 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/config.json ...
	I1101 10:42:12.970221  368496 machine.go:94] provisionDockerMachine start ...
	I1101 10:42:12.970300  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:12.990251  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:12.990557  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:12.990574  368496 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:42:12.991359  368496 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50424->127.0.0.1:33118: read: connection reset by peer
	I1101 10:42:16.134232  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-071527
	
	I1101 10:42:16.134260  368496 ubuntu.go:182] provisioning hostname "embed-certs-071527"
	I1101 10:42:16.134338  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:16.152535  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:16.152846  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:16.152872  368496 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-071527 && echo "embed-certs-071527" | sudo tee /etc/hostname
	I1101 10:42:16.304442  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-071527
	
	I1101 10:42:16.304550  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:16.321748  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:16.321964  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:16.321985  368496 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-071527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-071527/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-071527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:42:16.463326  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:42:16.463363  368496 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:42:16.463390  368496 ubuntu.go:190] setting up certificates
	I1101 10:42:16.463404  368496 provision.go:84] configureAuth start
	I1101 10:42:16.463473  368496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:42:16.480950  368496 provision.go:143] copyHostCerts
	I1101 10:42:16.481017  368496 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:42:16.481036  368496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:42:16.481123  368496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:42:16.481275  368496 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:42:16.481286  368496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:42:16.481327  368496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:42:16.481445  368496 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:42:16.481456  368496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:42:16.481487  368496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:42:16.481616  368496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.embed-certs-071527 san=[127.0.0.1 192.168.103.2 embed-certs-071527 localhost minikube]
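The provision step above creates a server certificate signed by the profile's CA, with the SANs listed in the log (127.0.0.1, 192.168.103.2, embed-certs-071527, localhost, minikube). The Go sketch below illustrates the same idea; it is not minikube's provision code, it assumes a PKCS#1-encoded RSA CA key, and the file names are placeholders.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustDecode reads a PEM file and returns the bytes of its first block.
func mustDecode(path string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block.Bytes
}

func main() {
	// Placeholder paths; the log uses the profile's ca.pem and ca-key.pem.
	caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem")) // assumes a PKCS#1 RSA key
	if err != nil {
		log.Fatal(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-071527"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same SAN list as the provision.go log line above.
		DNSNames:    []string{"embed-certs-071527", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}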
	I1101 10:42:16.916939  368496 provision.go:177] copyRemoteCerts
	I1101 10:42:16.917007  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:42:16.917041  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:16.934924  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.035944  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:42:17.054849  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:42:17.073166  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:42:17.092130  368496 provision.go:87] duration metric: took 628.710617ms to configureAuth
	I1101 10:42:17.092165  368496 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:42:17.092378  368496 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:17.092532  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.110753  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:17.111008  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:17.111031  368496 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:42:17.409882  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:42:17.409917  368496 machine.go:97] duration metric: took 4.439676339s to provisionDockerMachine
	I1101 10:42:17.409931  368496 start.go:293] postStartSetup for "embed-certs-071527" (driver="docker")
	I1101 10:42:17.409943  368496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:42:17.410023  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:42:17.410075  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.428602  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	W1101 10:42:14.634914  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:16.636505  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:14.161336  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:16.661601  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	I1101 10:42:17.531781  368496 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:42:17.536220  368496 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:42:17.536251  368496 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:42:17.536265  368496 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 10:42:17.536325  368496 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 10:42:17.536436  368496 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem -> 615222.pem in /etc/ssl/certs
	I1101 10:42:17.536597  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:42:17.545281  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:42:17.563349  368496 start.go:296] duration metric: took 153.401996ms for postStartSetup
	I1101 10:42:17.563435  368496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:42:17.563473  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.580861  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.681364  368496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:42:17.686230  368496 fix.go:56] duration metric: took 5.030091922s for fixHost
	I1101 10:42:17.686258  368496 start.go:83] releasing machines lock for "embed-certs-071527", held for 5.030152616s
	I1101 10:42:17.686321  368496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:42:17.703788  368496 ssh_runner.go:195] Run: cat /version.json
	I1101 10:42:17.703833  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.703876  368496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:42:17.703957  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.723866  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.723875  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.886271  368496 ssh_runner.go:195] Run: systemctl --version
	I1101 10:42:17.892773  368496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:42:17.929416  368496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:42:17.934199  368496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:42:17.934268  368496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:42:17.942176  368496 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:42:17.942203  368496 start.go:496] detecting cgroup driver to use...
	I1101 10:42:17.942232  368496 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:42:17.942277  368496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:42:17.956846  368496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:42:17.969926  368496 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:42:17.969984  368496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:42:17.987763  368496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:42:18.000787  368496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:42:18.098750  368496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:42:18.185364  368496 docker.go:234] disabling docker service ...
	I1101 10:42:18.185425  368496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:42:18.200171  368496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:42:18.212245  368496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:42:18.299968  368496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:42:18.389487  368496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:42:18.402323  368496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:42:18.417595  368496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:42:18.417646  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.426413  368496 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:42:18.426460  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.438201  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.448731  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.457647  368496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:42:18.465716  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.474643  368496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.483603  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.494225  368496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:42:18.503559  368496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:42:18.511049  368496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:18.598345  368496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:42:18.709217  368496 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:42:18.709288  368496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:42:18.713313  368496 start.go:564] Will wait 60s for crictl version
	I1101 10:42:18.713366  368496 ssh_runner.go:195] Run: which crictl
	I1101 10:42:18.716906  368496 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:42:18.741616  368496 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:42:18.741679  368496 ssh_runner.go:195] Run: crio --version
	I1101 10:42:18.769631  368496 ssh_runner.go:195] Run: crio --version
	I1101 10:42:18.799572  368496 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:42:18.800779  368496 cli_runner.go:164] Run: docker network inspect embed-certs-071527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
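The --format strings passed to the docker container/network inspect calls above are Go text/template expressions evaluated against Docker's inspect payload. A small illustration of how such a template renders, using a hypothetical struct in place of the real payload (the subnet value here is made up):

package main

import (
	"os"
	"text/template"
)

type ipamConfig struct {
	Subnet  string
	Gateway string
}

type network struct {
	Name string
	IPAM struct{ Config []ipamConfig }
}

func main() {
	// Same template syntax as the inspect --format strings above.
	const format = "{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}\n"
	n := network{Name: "embed-certs-071527"}
	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.103.0/24", Gateway: "192.168.103.1"}}
	tmpl := template.Must(template.New("net").Parse(format))
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}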
	I1101 10:42:18.817146  368496 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 10:42:18.821475  368496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:42:18.831787  368496 kubeadm.go:884] updating cluster {Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:42:18.831915  368496 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:18.831968  368496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:42:18.866384  368496 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:42:18.866405  368496 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:42:18.866449  368496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:42:18.892169  368496 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:42:18.892192  368496 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:42:18.892200  368496 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1101 10:42:18.892301  368496 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-071527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:42:18.892380  368496 ssh_runner.go:195] Run: crio config
	I1101 10:42:18.938000  368496 cni.go:84] Creating CNI manager for ""
	I1101 10:42:18.938023  368496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:18.938041  368496 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:42:18.938063  368496 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-071527 NodeName:embed-certs-071527 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:42:18.938182  368496 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-071527"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:42:18.938242  368496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:42:18.946826  368496 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:42:18.946897  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:42:18.954801  368496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1101 10:42:18.967590  368496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:42:18.981433  368496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
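The kubeadm.yaml written above (and shown in full earlier in this log) is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A minimal sketch, not part of minikube, that walks those documents and prints two ClusterConfiguration fields; it assumes gopkg.in/yaml.v3 is available and uses the path from the log line above.

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// yaml.NewDecoder iterates over the "---"-separated documents in the stream.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		if doc["kind"] == "ClusterConfiguration" {
			fmt.Println("kubernetesVersion:", doc["kubernetesVersion"])
			fmt.Println("controlPlaneEndpoint:", doc["controlPlaneEndpoint"])
		}
	}
}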
	I1101 10:42:18.994976  368496 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:42:18.998531  368496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:42:19.009380  368496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:19.091222  368496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:42:19.122489  368496 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527 for IP: 192.168.103.2
	I1101 10:42:19.122542  368496 certs.go:195] generating shared ca certs ...
	I1101 10:42:19.122564  368496 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:19.122731  368496 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 10:42:19.122792  368496 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 10:42:19.122807  368496 certs.go:257] generating profile certs ...
	I1101 10:42:19.122926  368496 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/client.key
	I1101 10:42:19.122986  368496 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key.afddc8c1
	I1101 10:42:19.123047  368496 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.key
	I1101 10:42:19.123182  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem (1338 bytes)
	W1101 10:42:19.123233  368496 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522_empty.pem, impossibly tiny 0 bytes
	I1101 10:42:19.123245  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:42:19.123280  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:42:19.123308  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:42:19.123337  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 10:42:19.123388  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:42:19.124208  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:42:19.146314  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:42:19.168951  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:42:19.192551  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:42:19.220147  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:42:19.245723  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:42:19.268283  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:42:19.289183  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:42:19.311754  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem --> /usr/share/ca-certificates/61522.pem (1338 bytes)
	I1101 10:42:19.333810  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /usr/share/ca-certificates/615222.pem (1708 bytes)
	I1101 10:42:19.356124  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:42:19.377800  368496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:42:19.393408  368496 ssh_runner.go:195] Run: openssl version
	I1101 10:42:19.401003  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/61522.pem && ln -fs /usr/share/ca-certificates/61522.pem /etc/ssl/certs/61522.pem"
	I1101 10:42:19.411579  368496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/61522.pem
	I1101 10:42:19.415878  368496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:01 /usr/share/ca-certificates/61522.pem
	I1101 10:42:19.415933  368496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/61522.pem
	I1101 10:42:19.471208  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/61522.pem /etc/ssl/certs/51391683.0"
	I1101 10:42:19.482043  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/615222.pem && ln -fs /usr/share/ca-certificates/615222.pem /etc/ssl/certs/615222.pem"
	I1101 10:42:19.492517  368496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/615222.pem
	I1101 10:42:19.497198  368496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:01 /usr/share/ca-certificates/615222.pem
	I1101 10:42:19.497248  368496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/615222.pem
	I1101 10:42:19.553784  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/615222.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:42:19.564362  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:42:19.574902  368496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:19.579592  368496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:19.579650  368496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:19.633944  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:42:19.645552  368496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:42:19.650875  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:42:19.710929  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:42:19.765523  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:42:19.828247  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:42:19.877548  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:42:19.933659  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
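Each of the openssl x509 ... -checkend 86400 commands above asks whether a certificate expires within the next 24 hours; a non-zero exit is what prompts minikube to regenerate it. A rough Go equivalent of one such check (the path is a placeholder taken from the log; this is not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of -checkend 86400: fail if the cert expires within 24 hours.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}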
	I1101 10:42:19.992714  368496 kubeadm.go:401] StartCluster: {Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:19.992866  368496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:42:19.992928  368496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:42:20.036018  368496 cri.go:89] found id: "e95c5bdefe5bab954d844595226fa1bc71903693fcc281f98c8ca4acd6ebaf44"
	I1101 10:42:20.036180  368496 cri.go:89] found id: "1e1f2165fff912b94ead346d574a39dc51a0e07c82ecfc46cf2218274dc3846b"
	I1101 10:42:20.036188  368496 cri.go:89] found id: "cdeac8cd5ed20ed69f2cae85240af0e1ad8eda39a544a107fdc467d0259e681f"
	I1101 10:42:20.036193  368496 cri.go:89] found id: "2c76e616b169eed9eccc0cbbe049577478d27b125b73db1838da83e15bac755d"
	I1101 10:42:20.036197  368496 cri.go:89] found id: ""
	I1101 10:42:20.036250  368496 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:42:20.052319  368496 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:20Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:42:20.052419  368496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:42:20.064481  368496 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:42:20.064516  368496 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:42:20.064563  368496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:42:20.076775  368496 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:42:20.077819  368496 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-071527" does not appear in /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:20.078753  368496 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-58021/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-071527" cluster setting kubeconfig missing "embed-certs-071527" context setting]
	I1101 10:42:20.079735  368496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:20.081920  368496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:42:20.093440  368496 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1101 10:42:20.093482  368496 kubeadm.go:602] duration metric: took 28.955359ms to restartPrimaryControlPlane
	I1101 10:42:20.093501  368496 kubeadm.go:403] duration metric: took 100.790269ms to StartCluster
	I1101 10:42:20.093522  368496 settings.go:142] acquiring lock: {Name:mka443f0ac99a59b23190497686b8296dc73358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:20.093670  368496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:20.096021  368496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:20.096378  368496 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:42:20.096664  368496 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:20.096725  368496 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:42:20.096815  368496 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-071527"
	I1101 10:42:20.096843  368496 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-071527"
	W1101 10:42:20.096857  368496 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:42:20.096891  368496 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:42:20.097441  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.097446  368496 addons.go:70] Setting default-storageclass=true in profile "embed-certs-071527"
	I1101 10:42:20.097475  368496 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-071527"
	I1101 10:42:20.097611  368496 addons.go:70] Setting dashboard=true in profile "embed-certs-071527"
	I1101 10:42:20.097644  368496 addons.go:239] Setting addon dashboard=true in "embed-certs-071527"
	W1101 10:42:20.097654  368496 addons.go:248] addon dashboard should already be in state true
	I1101 10:42:20.097688  368496 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:42:20.097873  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.098187  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.098234  368496 out.go:179] * Verifying Kubernetes components...
	I1101 10:42:20.102685  368496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:20.124516  368496 addons.go:239] Setting addon default-storageclass=true in "embed-certs-071527"
	W1101 10:42:20.124543  368496 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:42:20.124572  368496 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:42:20.125148  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.126383  368496 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:42:20.126448  368496 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:42:20.127475  368496 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:42:20.127505  368496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:42:20.127560  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:20.129082  368496 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1101 10:42:16.983584  365377 pod_ready.go:104] pod "coredns-66bc5c9577-6zph7" is not "Ready", error: node "no-preload-753486" hosting pod "coredns-66bc5c9577-6zph7" is not "Ready" (will retry)
	I1101 10:42:17.983724  365377 pod_ready.go:94] pod "coredns-66bc5c9577-6zph7" is "Ready"
	I1101 10:42:17.983754  365377 pod_ready.go:86] duration metric: took 9.505816997s for pod "coredns-66bc5c9577-6zph7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:17.985915  365377 pod_ready.go:83] waiting for pod "etcd-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.491849  365377 pod_ready.go:94] pod "etcd-no-preload-753486" is "Ready"
	I1101 10:42:18.491875  365377 pod_ready.go:86] duration metric: took 505.934613ms for pod "etcd-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.494221  365377 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.498465  365377 pod_ready.go:94] pod "kube-apiserver-no-preload-753486" is "Ready"
	I1101 10:42:18.498489  365377 pod_ready.go:86] duration metric: took 4.246373ms for pod "kube-apiserver-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.500663  365377 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:42:20.511850  365377 pod_ready.go:104] pod "kube-controller-manager-no-preload-753486" is not "Ready", error: <nil>
	I1101 10:42:20.130030  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:42:20.130050  368496 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:42:20.130125  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:20.155538  368496 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:42:20.155564  368496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:42:20.155623  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:20.163671  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:20.169694  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:20.191939  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:20.288119  368496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:42:20.306159  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:42:20.306194  368496 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:42:20.310206  368496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:42:20.315991  368496 node_ready.go:35] waiting up to 6m0s for node "embed-certs-071527" to be "Ready" ...
	I1101 10:42:20.325168  368496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:42:20.333743  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:42:20.333815  368496 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:42:20.355195  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:42:20.355226  368496 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:42:20.378242  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:42:20.378264  368496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:42:20.400055  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:42:20.400089  368496 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:42:20.417257  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:42:20.417297  368496 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:42:20.434766  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:42:20.434792  368496 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:42:20.452816  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:42:20.452852  368496 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:42:20.470856  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:42:20.470887  368496 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:42:20.489267  368496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:42:21.827019  368496 node_ready.go:49] node "embed-certs-071527" is "Ready"
	I1101 10:42:21.827060  368496 node_ready.go:38] duration metric: took 1.511035582s for node "embed-certs-071527" to be "Ready" ...
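The node_ready wait above polls the node object until its Ready condition reports True. A minimal client-go sketch of the same check; it is illustrative rather than minikube's node_ready.go, and the kubeconfig path is a placeholder based on the paths in this log.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path for this profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21830-58021/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-071527", metav1.GetOptions{})
		if err != nil {
			log.Printf("get node: %v", err)
		} else {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			log.Println("node not Ready yet, retrying")
		}
		time.Sleep(2 * time.Second)
	}
}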
	I1101 10:42:21.827077  368496 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:42:21.827147  368496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:42:22.482041  368496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.17180187s)
	I1101 10:42:22.482106  368496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.156906265s)
	I1101 10:42:22.482192  368496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.992885947s)
	I1101 10:42:22.482250  368496 api_server.go:72] duration metric: took 2.385830473s to wait for apiserver process to appear ...
	I1101 10:42:22.482267  368496 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:42:22.482351  368496 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:42:22.483670  368496 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-071527 addons enable metrics-server
	
	I1101 10:42:22.489684  368496 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:22.489716  368496 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
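The 500 responses above are normal right after the apiserver restarts: the log shows the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks still failing, so the wait loop keeps polling /healthz until it returns 200. A self-contained sketch of such a poll; it skips TLS verification purely to stay short, whereas a real client should verify against the cluster CA, and the address is taken from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps the sketch self-contained; verify the
		// cluster CA in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 10; i++ {
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err != nil {
			log.Printf("healthz not reachable yet: %v", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("apiserver never became healthy")
}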
	I1101 10:42:22.495086  368496 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1101 10:42:19.136186  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:21.136738  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:19.162930  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:21.661978  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	I1101 10:42:22.667611  359640 pod_ready.go:94] pod "coredns-5dd5756b68-9fdk6" is "Ready"
	I1101 10:42:22.667642  359640 pod_ready.go:86] duration metric: took 37.512281759s for pod "coredns-5dd5756b68-9fdk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.672431  359640 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.679390  359640 pod_ready.go:94] pod "etcd-old-k8s-version-707467" is "Ready"
	I1101 10:42:22.679419  359640 pod_ready.go:86] duration metric: took 6.957128ms for pod "etcd-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.685128  359640 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.690874  359640 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-707467" is "Ready"
	I1101 10:42:22.690900  359640 pod_ready.go:86] duration metric: took 5.745955ms for pod "kube-apiserver-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.695536  359640 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.860629  359640 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-707467" is "Ready"
	I1101 10:42:22.860711  359640 pod_ready.go:86] duration metric: took 165.147298ms for pod "kube-controller-manager-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.059741  359640 pod_ready.go:83] waiting for pod "kube-proxy-2pbws" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.459343  359640 pod_ready.go:94] pod "kube-proxy-2pbws" is "Ready"
	I1101 10:42:23.459373  359640 pod_ready.go:86] duration metric: took 399.595768ms for pod "kube-proxy-2pbws" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:42:23.010300  365377 pod_ready.go:104] pod "kube-controller-manager-no-preload-753486" is not "Ready", error: <nil>
	I1101 10:42:23.507130  365377 pod_ready.go:94] pod "kube-controller-manager-no-preload-753486" is "Ready"
	I1101 10:42:23.507157  365377 pod_ready.go:86] duration metric: took 5.00647596s for pod "kube-controller-manager-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.510616  365377 pod_ready.go:83] waiting for pod "kube-proxy-d5hv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.515189  365377 pod_ready.go:94] pod "kube-proxy-d5hv4" is "Ready"
	I1101 10:42:23.515214  365377 pod_ready.go:86] duration metric: took 4.571417ms for pod "kube-proxy-d5hv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.517263  365377 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.781834  365377 pod_ready.go:94] pod "kube-scheduler-no-preload-753486" is "Ready"
	I1101 10:42:23.781860  365377 pod_ready.go:86] duration metric: took 264.579645ms for pod "kube-scheduler-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.781872  365377 pod_ready.go:40] duration metric: took 15.30754162s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:23.838199  365377 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:42:23.841685  365377 out.go:179] * Done! kubectl is now configured to use "no-preload-753486" cluster and "default" namespace by default
	I1101 10:42:23.660338  359640 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:24.061116  359640 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-707467" is "Ready"
	I1101 10:42:24.061146  359640 pod_ready.go:86] duration metric: took 400.77729ms for pod "kube-scheduler-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:24.061163  359640 pod_ready.go:40] duration metric: took 38.910389326s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:24.128259  359640 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1101 10:42:24.167901  359640 out.go:203] 
	W1101 10:42:24.180817  359640 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:42:24.182810  359640 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:42:24.187301  359640 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-707467" cluster and "default" namespace by default
	I1101 10:42:22.496547  368496 addons.go:515] duration metric: took 2.399817846s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 10:42:22.982984  368496 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:42:22.989290  368496 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:22.989326  368496 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:42:23.483006  368496 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:42:23.488530  368496 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 10:42:23.489654  368496 api_server.go:141] control plane version: v1.34.1
	I1101 10:42:23.489681  368496 api_server.go:131] duration metric: took 1.007346794s to wait for apiserver health ...
	I1101 10:42:23.489692  368496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:42:23.493313  368496 system_pods.go:59] 8 kube-system pods found
	I1101 10:42:23.493343  368496 system_pods.go:61] "coredns-66bc5c9577-c5td8" [8b884210-c20d-49e8-a595-b5d5e54a2362] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:23.493350  368496 system_pods.go:61] "etcd-embed-certs-071527" [d8a6e438-eddd-43f3-9608-3a008687442f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:42:23.493357  368496 system_pods.go:61] "kindnet-m4vzv" [ca8c842c-8f8c-46c9-844e-fa29b8bec68b] Running
	I1101 10:42:23.493362  368496 system_pods.go:61] "kube-apiserver-embed-certs-071527" [bd3db226-4dbc-4d1f-93ad-55ea39ecb425] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:42:23.493367  368496 system_pods.go:61] "kube-controller-manager-embed-certs-071527" [badbd218-84da-4a8a-b62d-3b8c2a60e20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:42:23.493374  368496 system_pods.go:61] "kube-proxy-l5pzc" [0d6bc572-4a6b-44f1-988f-6aa83896b936] Running
	I1101 10:42:23.493378  368496 system_pods.go:61] "kube-scheduler-embed-certs-071527" [44b21383-497b-452f-b64b-1792f143b547] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:42:23.493390  368496 system_pods.go:61] "storage-provisioner" [ff05c619-0eb3-487b-91e5-6e63996f8329] Running
	I1101 10:42:23.493401  368496 system_pods.go:74] duration metric: took 3.702533ms to wait for pod list to return data ...
	I1101 10:42:23.493411  368496 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:42:23.496249  368496 default_sa.go:45] found service account: "default"
	I1101 10:42:23.496271  368496 default_sa.go:55] duration metric: took 2.852113ms for default service account to be created ...
	I1101 10:42:23.496282  368496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:42:23.499163  368496 system_pods.go:86] 8 kube-system pods found
	I1101 10:42:23.499204  368496 system_pods.go:89] "coredns-66bc5c9577-c5td8" [8b884210-c20d-49e8-a595-b5d5e54a2362] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:23.499215  368496 system_pods.go:89] "etcd-embed-certs-071527" [d8a6e438-eddd-43f3-9608-3a008687442f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:42:23.499233  368496 system_pods.go:89] "kindnet-m4vzv" [ca8c842c-8f8c-46c9-844e-fa29b8bec68b] Running
	I1101 10:42:23.499243  368496 system_pods.go:89] "kube-apiserver-embed-certs-071527" [bd3db226-4dbc-4d1f-93ad-55ea39ecb425] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:42:23.499289  368496 system_pods.go:89] "kube-controller-manager-embed-certs-071527" [badbd218-84da-4a8a-b62d-3b8c2a60e20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:42:23.499304  368496 system_pods.go:89] "kube-proxy-l5pzc" [0d6bc572-4a6b-44f1-988f-6aa83896b936] Running
	I1101 10:42:23.499316  368496 system_pods.go:89] "kube-scheduler-embed-certs-071527" [44b21383-497b-452f-b64b-1792f143b547] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:42:23.499322  368496 system_pods.go:89] "storage-provisioner" [ff05c619-0eb3-487b-91e5-6e63996f8329] Running
	I1101 10:42:23.499332  368496 system_pods.go:126] duration metric: took 3.043029ms to wait for k8s-apps to be running ...
	I1101 10:42:23.499341  368496 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:42:23.499395  368496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:23.518085  368496 system_svc.go:56] duration metric: took 18.734056ms WaitForService to wait for kubelet
	I1101 10:42:23.518112  368496 kubeadm.go:587] duration metric: took 3.421696433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:42:23.518132  368496 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:42:23.521173  368496 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:42:23.521202  368496 node_conditions.go:123] node cpu capacity is 8
	I1101 10:42:23.521216  368496 node_conditions.go:105] duration metric: took 3.079009ms to run NodePressure ...
	I1101 10:42:23.521237  368496 start.go:242] waiting for startup goroutines ...
	I1101 10:42:23.521252  368496 start.go:247] waiting for cluster config update ...
	I1101 10:42:23.521272  368496 start.go:256] writing updated cluster config ...
	I1101 10:42:23.521614  368496 ssh_runner.go:195] Run: rm -f paused
	I1101 10:42:23.525820  368496 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:23.530097  368496 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c5td8" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:42:25.535303  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:23.138242  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:25.635848  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:27.536545  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:30.038586  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:27.636001  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:29.636138  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:32.135485  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:32.535642  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:35.035519  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:37.036976  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:34.136045  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:36.635396  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 01 10:42:22 no-preload-753486 crio[556]: time="2025-11-01T10:42:22.401330539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:22 no-preload-753486 crio[556]: time="2025-11-01T10:42:22.428103158Z" level=info msg="Created container 4f7af923324210855ae4649244938c5eb1bd3a5f07cf8a7189ce4721a8fd57be: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8b57h/kubernetes-dashboard" id=6cbaf337-1fa6-45ef-a50b-4a5456b89f96 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:22 no-preload-753486 crio[556]: time="2025-11-01T10:42:22.428679679Z" level=info msg="Starting container: 4f7af923324210855ae4649244938c5eb1bd3a5f07cf8a7189ce4721a8fd57be" id=c1b3fc20-9857-4a74-a4d3-78aaee8e006d name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:22 no-preload-753486 crio[556]: time="2025-11-01T10:42:22.430429383Z" level=info msg="Started container" PID=1481 containerID=4f7af923324210855ae4649244938c5eb1bd3a5f07cf8a7189ce4721a8fd57be description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8b57h/kubernetes-dashboard id=c1b3fc20-9857-4a74-a4d3-78aaee8e006d name=/runtime.v1.RuntimeService/StartContainer sandboxID=fabf4702590a207ffdb2c3383e14094bd6066268cbcc3b834033cd6c99d3654a
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.859572304Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=797ae5f7-3c25-4812-be54-32867bcf73b3 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.86020905Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d82ac601-fb09-4ad3-8d88-201417b5b2c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.862772429Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=98898194-dba5-44f9-a128-1a57ef4ab860 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.86848294Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper" id=3a9f6f8e-4778-4970-83b1-d75f8316a5d2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.868624383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.875076884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.87557517Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.908162793Z" level=info msg="Created container 58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper" id=3a9f6f8e-4778-4970-83b1-d75f8316a5d2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.908803905Z" level=info msg="Starting container: 58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0" id=01948e7a-72d2-44ed-812c-0b25ea0c3c83 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.910376165Z" level=info msg="Started container" PID=1722 containerID=58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper id=01948e7a-72d2-44ed-812c-0b25ea0c3c83 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d968a4dbc6f1e6d6e74d8a401376a080b40e8d51859255e4081a88e6a9dbad9f
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.39321788Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8d9918bf-aa44-4f2d-a6db-0813c343a214 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.396288096Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a4a43a15-562d-4e0c-af62-9458d6175aae name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.398832613Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper" id=b83be6eb-9f26-4c22-99eb-6416de253a4c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.398961616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.405568736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.406205996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.437526781Z" level=info msg="Created container d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper" id=b83be6eb-9f26-4c22-99eb-6416de253a4c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.4381176Z" level=info msg="Starting container: d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2" id=c8ca1126-99b0-4978-a5fa-dcd61b983a01 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.439759901Z" level=info msg="Started container" PID=1733 containerID=d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper id=c8ca1126-99b0-4978-a5fa-dcd61b983a01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d968a4dbc6f1e6d6e74d8a401376a080b40e8d51859255e4081a88e6a9dbad9f
	Nov 01 10:42:26 no-preload-753486 crio[556]: time="2025-11-01T10:42:26.398065372Z" level=info msg="Removing container: 58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0" id=6bb9c501-78ce-4a70-920b-ecba08d0292f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:42:26 no-preload-753486 crio[556]: time="2025-11-01T10:42:26.407823454Z" level=info msg="Removed container 58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper" id=6bb9c501-78ce-4a70-920b-ecba08d0292f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d39dca098468c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   1                   d968a4dbc6f1e       dashboard-metrics-scraper-6ffb444bf9-8n5qj   kubernetes-dashboard
	4f7af92332421       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   15 seconds ago      Running             kubernetes-dashboard        0                   fabf4702590a2       kubernetes-dashboard-855c9754f9-8b57h        kubernetes-dashboard
	76c50ac8eb959       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           27 seconds ago      Running             coredns                     0                   b06e0c7f49903       coredns-66bc5c9577-6zph7                     kube-system
	a75bec9613145       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           27 seconds ago      Running             busybox                     1                   b921cd7872b79       busybox                                      default
	0586fe11d7a4a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           30 seconds ago      Running             kindnet-cni                 0                   7af868f8f7ef1       kindnet-dlvlr                                kube-system
	74bcc4b75d2f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           30 seconds ago      Exited              storage-provisioner         0                   eb7c10c050015       storage-provisioner                          kube-system
	59db1705ea900       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           30 seconds ago      Running             kube-proxy                  0                   6cac374637b3e       kube-proxy-d5hv4                             kube-system
	171c4eb221865       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           33 seconds ago      Running             kube-scheduler              0                   8a4a9611fc223       kube-scheduler-no-preload-753486             kube-system
	84b6025b4eb5c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           33 seconds ago      Running             kube-apiserver              0                   a24ff41707d51       kube-apiserver-no-preload-753486             kube-system
	fb6589d637b14       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           33 seconds ago      Running             kube-controller-manager     0                   65afca34f106b       kube-controller-manager-no-preload-753486    kube-system
	6a2b42f9da1f2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           33 seconds ago      Running             etcd                        0                   3c66cfcd271f2       etcd-no-preload-753486                       kube-system
	
	
	==> coredns [76c50ac8eb959c217b4973a9f7c453efe0965e09cddd683d81715e60193be21c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52051 - 60267 "HINFO IN 208790889979226413.1212580520873481645. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.031287692s
	
	
	==> describe nodes <==
	Name:               no-preload-753486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-753486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=no-preload-753486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_41_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:41:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-753486
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:42:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:42:17 +0000   Sat, 01 Nov 2025 10:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:42:17 +0000   Sat, 01 Nov 2025 10:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:42:17 +0000   Sat, 01 Nov 2025 10:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:42:17 +0000   Sat, 01 Nov 2025 10:42:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-753486
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                cc437131-bcc8-4de4-a901-e5bef9dd6b70
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 coredns-66bc5c9577-6zph7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     87s
	  kube-system                 etcd-no-preload-753486                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         93s
	  kube-system                 kindnet-dlvlr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-no-preload-753486              250m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-no-preload-753486     200m (2%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-d5hv4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-no-preload-753486              100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8n5qj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8b57h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 86s                kube-proxy       
	  Normal  Starting                 30s                kube-proxy       
	  Normal  NodeHasSufficientMemory  98s (x8 over 98s)  kubelet          Node no-preload-753486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x8 over 98s)  kubelet          Node no-preload-753486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x8 over 98s)  kubelet          Node no-preload-753486 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     93s                kubelet          Node no-preload-753486 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  93s                kubelet          Node no-preload-753486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s                kubelet          Node no-preload-753486 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 93s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           88s                node-controller  Node no-preload-753486 event: Registered Node no-preload-753486 in Controller
	  Normal  NodeReady                74s                kubelet          Node no-preload-753486 status is now: NodeReady
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node no-preload-753486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node no-preload-753486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node no-preload-753486 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node no-preload-753486 event: Registered Node no-preload-753486 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[Nov 1 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	
	
	==> etcd [6a2b42f9da1f25e14c5acbd289b7642c05c2c582183b501acc14800027b8bcd7] <==
	{"level":"warn","ts":"2025-11-01T10:42:06.297089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.303697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.312445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.319102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.325681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.332115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.339903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.348473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.354616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.361145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.372691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.378892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.385320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.391551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.397921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.405046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.411378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.418071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.431399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.437627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.444353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.460210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.466990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.473531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.527291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38724","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:42:38 up  2:24,  0 user,  load average: 4.49, 3.85, 2.51
	Linux no-preload-753486 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0586fe11d7a4a071268a355a4ef698fd26332adf54529b8328586db399b5fffd] <==
	I1101 10:42:07.787221       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:42:07.787475       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:42:07.787637       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:42:07.787657       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:42:07.787685       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:42:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:42:08.089299       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:42:08.181936       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:42:08.181969       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:42:08.182238       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:42:08.582064       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:42:08.582096       1 metrics.go:72] Registering metrics
	I1101 10:42:08.582178       1 controller.go:711] "Syncing nftables rules"
	I1101 10:42:18.090125       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:42:18.090244       1 main.go:301] handling current node
	I1101 10:42:28.089678       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:42:28.089736       1 main.go:301] handling current node
	I1101 10:42:38.098356       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:42:38.098400       1 main.go:301] handling current node
	
	
	==> kube-apiserver [84b6025b4eb5c817b09731471e31eff341dd6e3ddafe2af270b933c44ec0b51e] <==
	I1101 10:42:06.985245       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:42:06.985254       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:42:06.985263       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:42:06.985270       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:42:06.985276       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:42:06.985175       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:42:06.984986       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:42:06.985003       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:42:06.991185       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 10:42:06.991551       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:42:07.006080       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:42:07.012377       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:42:07.012411       1 policy_source.go:240] refreshing policies
	I1101 10:42:07.013933       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:42:07.219012       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:42:07.246684       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:42:07.263319       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:42:07.270724       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:42:07.277623       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:42:07.312754       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.111.21"}
	I1101 10:42:07.323565       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.128.248"}
	I1101 10:42:07.888348       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:42:10.563766       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:42:10.715091       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:42:10.813571       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [fb6589d637b145d192fbf2e4239b9fbb2482d88501af07f042fb1dd618dc43f5] <==
	I1101 10:42:10.312115       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:42:10.312105       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-753486"
	I1101 10:42:10.312176       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:42:10.313177       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:42:10.314431       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:42:10.315644       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:42:10.315700       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:42:10.315751       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:42:10.315764       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:42:10.315771       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:42:10.315795       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:42:10.317025       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:42:10.318216       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:42:10.318241       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:42:10.320597       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:42:10.321936       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:42:10.324192       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:42:10.326437       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:42:10.326454       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:42:10.326490       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:42:10.327637       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:42:10.328821       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:42:10.331011       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:42:10.337242       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:42:20.314082       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [59db1705ea900c5162380c88e4070278fb31d77328f589bfb883ac36648a8ddd] <==
	I1101 10:42:07.692097       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:42:07.764290       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:42:07.865288       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:42:07.865332       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:42:07.865558       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:42:07.883889       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:42:07.883954       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:42:07.890369       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:42:07.890828       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:42:07.890867       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:42:07.892433       1 config.go:200] "Starting service config controller"
	I1101 10:42:07.892444       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:42:07.892454       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:42:07.892433       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:42:07.892473       1 config.go:309] "Starting node config controller"
	I1101 10:42:07.892485       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:42:07.892506       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:42:07.892476       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:42:07.892873       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:42:07.993092       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:42:07.993150       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:42:07.993156       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [171c4eb221865cdd52d74e6f620b831af6e5394cf091a1eb5a93396a818ccd67] <==
	I1101 10:42:05.623061       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:42:06.905851       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:42:06.905892       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:42:06.905923       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:42:06.905934       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:42:06.952861       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:42:06.952896       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:42:06.955463       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:06.955528       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:06.955909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:42:06.956146       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:42:07.055919       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:42:08 no-preload-753486 kubelet[705]: E1101 10:42:08.012622     705 projected.go:196] Error preparing data for projected volume kube-api-access-zh48p for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 01 10:42:08 no-preload-753486 kubelet[705]: E1101 10:42:08.012713     705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9dd3f019-b2ff-48ef-871e-baed334b2205-kube-api-access-zh48p podName:9dd3f019-b2ff-48ef-871e-baed334b2205 nodeName:}" failed. No retries permitted until 2025-11-01 10:42:09.012688038 +0000 UTC m=+4.784659610 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zh48p" (UniqueName: "kubernetes.io/projected/9dd3f019-b2ff-48ef-871e-baed334b2205-kube-api-access-zh48p") pod "busybox" (UID: "9dd3f019-b2ff-48ef-871e-baed334b2205") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 10:42:08 no-preload-753486 kubelet[705]: E1101 10:42:08.917648     705 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 10:42:08 no-preload-753486 kubelet[705]: E1101 10:42:08.917728     705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df6b9a0f-df6b-4830-ad22-495137f60f10-config-volume podName:df6b9a0f-df6b-4830-ad22-495137f60f10 nodeName:}" failed. No retries permitted until 2025-11-01 10:42:10.917712727 +0000 UTC m=+6.689684275 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/df6b9a0f-df6b-4830-ad22-495137f60f10-config-volume") pod "coredns-66bc5c9577-6zph7" (UID: "df6b9a0f-df6b-4830-ad22-495137f60f10") : object "kube-system"/"coredns" not registered
	Nov 01 10:42:09 no-preload-753486 kubelet[705]: E1101 10:42:09.017922     705 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 01 10:42:09 no-preload-753486 kubelet[705]: E1101 10:42:09.017957     705 projected.go:196] Error preparing data for projected volume kube-api-access-zh48p for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 01 10:42:09 no-preload-753486 kubelet[705]: E1101 10:42:09.018020     705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9dd3f019-b2ff-48ef-871e-baed334b2205-kube-api-access-zh48p podName:9dd3f019-b2ff-48ef-871e-baed334b2205 nodeName:}" failed. No retries permitted until 2025-11-01 10:42:11.018004815 +0000 UTC m=+6.789976381 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zh48p" (UniqueName: "kubernetes.io/projected/9dd3f019-b2ff-48ef-871e-baed334b2205-kube-api-access-zh48p") pod "busybox" (UID: "9dd3f019-b2ff-48ef-871e-baed334b2205") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 10:42:16 no-preload-753486 kubelet[705]: I1101 10:42:16.569827     705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:42:17 no-preload-753486 kubelet[705]: I1101 10:42:17.570321     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e5367590-1220-4c14-b08b-645ddae81b56-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-8n5qj\" (UID: \"e5367590-1220-4c14-b08b-645ddae81b56\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj"
	Nov 01 10:42:17 no-preload-753486 kubelet[705]: I1101 10:42:17.570390     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr9p9\" (UniqueName: \"kubernetes.io/projected/925c0c6b-e42b-4d12-b067-bbaf38b602ed-kube-api-access-vr9p9\") pod \"kubernetes-dashboard-855c9754f9-8b57h\" (UID: \"925c0c6b-e42b-4d12-b067-bbaf38b602ed\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8b57h"
	Nov 01 10:42:17 no-preload-753486 kubelet[705]: I1101 10:42:17.570455     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sbz6\" (UniqueName: \"kubernetes.io/projected/e5367590-1220-4c14-b08b-645ddae81b56-kube-api-access-6sbz6\") pod \"dashboard-metrics-scraper-6ffb444bf9-8n5qj\" (UID: \"e5367590-1220-4c14-b08b-645ddae81b56\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj"
	Nov 01 10:42:17 no-preload-753486 kubelet[705]: I1101 10:42:17.570546     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/925c0c6b-e42b-4d12-b067-bbaf38b602ed-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8b57h\" (UID: \"925c0c6b-e42b-4d12-b067-bbaf38b602ed\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8b57h"
	Nov 01 10:42:25 no-preload-753486 kubelet[705]: I1101 10:42:25.392779     705 scope.go:117] "RemoveContainer" containerID="58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0"
	Nov 01 10:42:25 no-preload-753486 kubelet[705]: I1101 10:42:25.404000     705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8b57h" podStartSLOduration=10.865028727 podStartE2EDuration="15.403976262s" podCreationTimestamp="2025-11-01 10:42:10 +0000 UTC" firstStartedPulling="2025-11-01 10:42:17.852966179 +0000 UTC m=+13.624937730" lastFinishedPulling="2025-11-01 10:42:22.391913717 +0000 UTC m=+18.163885265" observedRunningTime="2025-11-01 10:42:23.39748315 +0000 UTC m=+19.169454721" watchObservedRunningTime="2025-11-01 10:42:25.403976262 +0000 UTC m=+21.175947833"
	Nov 01 10:42:26 no-preload-753486 kubelet[705]: I1101 10:42:26.396594     705 scope.go:117] "RemoveContainer" containerID="58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0"
	Nov 01 10:42:26 no-preload-753486 kubelet[705]: I1101 10:42:26.396747     705 scope.go:117] "RemoveContainer" containerID="d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2"
	Nov 01 10:42:26 no-preload-753486 kubelet[705]: E1101 10:42:26.396929     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8n5qj_kubernetes-dashboard(e5367590-1220-4c14-b08b-645ddae81b56)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj" podUID="e5367590-1220-4c14-b08b-645ddae81b56"
	Nov 01 10:42:27 no-preload-753486 kubelet[705]: I1101 10:42:27.400581     705 scope.go:117] "RemoveContainer" containerID="d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2"
	Nov 01 10:42:27 no-preload-753486 kubelet[705]: E1101 10:42:27.400761     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8n5qj_kubernetes-dashboard(e5367590-1220-4c14-b08b-645ddae81b56)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj" podUID="e5367590-1220-4c14-b08b-645ddae81b56"
	Nov 01 10:42:28 no-preload-753486 kubelet[705]: I1101 10:42:28.403553     705 scope.go:117] "RemoveContainer" containerID="d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2"
	Nov 01 10:42:28 no-preload-753486 kubelet[705]: E1101 10:42:28.403784     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8n5qj_kubernetes-dashboard(e5367590-1220-4c14-b08b-645ddae81b56)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj" podUID="e5367590-1220-4c14-b08b-645ddae81b56"
	Nov 01 10:42:36 no-preload-753486 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:42:36 no-preload-753486 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:42:36 no-preload-753486 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:42:36 no-preload-753486 systemd[1]: kubelet.service: Consumed 1.222s CPU time.
	
	
	==> kubernetes-dashboard [4f7af923324210855ae4649244938c5eb1bd3a5f07cf8a7189ce4721a8fd57be] <==
	2025/11/01 10:42:22 Using namespace: kubernetes-dashboard
	2025/11/01 10:42:22 Using in-cluster config to connect to apiserver
	2025/11/01 10:42:22 Using secret token for csrf signing
	2025/11/01 10:42:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:42:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:42:22 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:42:22 Generating JWE encryption key
	2025/11/01 10:42:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:42:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:42:23 Initializing JWE encryption key from synchronized object
	2025/11/01 10:42:23 Creating in-cluster Sidecar client
	2025/11/01 10:42:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:42:23 Serving insecurely on HTTP port: 9090
	2025/11/01 10:42:22 Starting overwatch
	
	
	==> storage-provisioner [74bcc4b75d2f266c94d273e33d106a067fc360c72037464209751efb3f223507] <==
	I1101 10:42:07.666857       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:42:37.670016       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-753486 -n no-preload-753486
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-753486 -n no-preload-753486: exit status 2 (336.476073ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-753486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-753486
helpers_test.go:243: (dbg) docker inspect no-preload-753486:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83",
	        "Created": "2025-11-01T10:40:35.467852575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 365573,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:41:55.973276216Z",
	            "FinishedAt": "2025-11-01T10:41:55.066086765Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83/hostname",
	        "HostsPath": "/var/lib/docker/containers/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83/hosts",
	        "LogPath": "/var/lib/docker/containers/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83/6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83-json.log",
	        "Name": "/no-preload-753486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-753486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-753486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6be5ddfae7c8a0aeeb7d15b14afe0c6c6e43e1cb73fba03d066d31f713e50b83",
	                "LowerDir": "/var/lib/docker/overlay2/cc0dcd6cf1b9bf2cb25a93b0871481cd4ef5d19c0441af5087e2777000b75593-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cc0dcd6cf1b9bf2cb25a93b0871481cd4ef5d19c0441af5087e2777000b75593/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cc0dcd6cf1b9bf2cb25a93b0871481cd4ef5d19c0441af5087e2777000b75593/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cc0dcd6cf1b9bf2cb25a93b0871481cd4ef5d19c0441af5087e2777000b75593/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-753486",
	                "Source": "/var/lib/docker/volumes/no-preload-753486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-753486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-753486",
	                "name.minikube.sigs.k8s.io": "no-preload-753486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e0eca2946438ba199907cf65fc35b6eb0c4096749251682ffa5c1b919d5ee09",
	            "SandboxKey": "/var/run/docker/netns/0e0eca294643",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-753486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:ea:40:52:7e:df",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d84c48ff1a5729254be6ec17799a5aeb1a98c07f8517c94be1c2de332505338",
	                    "EndpointID": "c5d8112f7969734548aa7939a667019f6234c08e152a38ca6fa2d515c852c079",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-753486",
	                        "6be5ddfae7c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753486 -n no-preload-753486
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753486 -n no-preload-753486: exit status 2 (345.162587ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-753486 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-753486 logs -n 25: (1.200527592s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo crio config                                                                                                                                                                                                     │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p custom-flannel-299863                                                                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p disable-driver-mounts-339061                                                                                                                                                                                                               │ disable-driver-mounts-339061 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-707467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p no-preload-753486 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-071527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p embed-certs-071527 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p no-preload-753486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p no-preload-753486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-071527 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ no-preload-753486 image list --format=json                                                                                                                                                                                                    │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p no-preload-753486 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ old-k8s-version-707467 image list --format=json                                                                                                                                                                                               │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p old-k8s-version-707467 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:42:12
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:42:12.470667  368496 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:42:12.470926  368496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:12.470935  368496 out.go:374] Setting ErrFile to fd 2...
	I1101 10:42:12.470939  368496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:12.471197  368496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:42:12.471724  368496 out.go:368] Setting JSON to false
	I1101 10:42:12.473007  368496 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8672,"bootTime":1761985060,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:42:12.473093  368496 start.go:143] virtualization: kvm guest
	I1101 10:42:12.475052  368496 out.go:179] * [embed-certs-071527] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:42:12.476242  368496 notify.go:221] Checking for updates...
	I1101 10:42:12.476265  368496 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:42:12.477618  368496 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:42:12.479253  368496 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:12.480396  368496 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:42:12.481804  368496 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:42:12.482907  368496 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:42:12.484696  368496 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:12.485407  368496 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:42:12.510178  368496 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:42:12.510319  368496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:12.566440  368496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:42:12.556585444 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:42:12.566630  368496 docker.go:319] overlay module found
	I1101 10:42:12.568236  368496 out.go:179] * Using the docker driver based on existing profile
	W1101 10:42:08.135661  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:10.135866  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:12.136114  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	I1101 10:42:12.569580  368496 start.go:309] selected driver: docker
	I1101 10:42:12.569598  368496 start.go:930] validating driver "docker" against &{Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:12.569703  368496 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:42:12.570360  368496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:12.629103  368496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:42:12.61946754 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:42:12.629435  368496 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:42:12.629475  368496 cni.go:84] Creating CNI manager for ""
	I1101 10:42:12.629562  368496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:12.629618  368496 start.go:353] cluster config:
	{Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:12.631965  368496 out.go:179] * Starting "embed-certs-071527" primary control-plane node in "embed-certs-071527" cluster
	I1101 10:42:12.633029  368496 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:42:12.634067  368496 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:42:12.635049  368496 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:12.635095  368496 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:42:12.635108  368496 cache.go:59] Caching tarball of preloaded images
	I1101 10:42:12.635157  368496 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:42:12.635206  368496 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:42:12.635218  368496 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:42:12.635307  368496 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/config.json ...
	I1101 10:42:12.655932  368496 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:42:12.655974  368496 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:42:12.655999  368496 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:42:12.656030  368496 start.go:360] acquireMachinesLock for embed-certs-071527: {Name:mk6e96a90f486564e010d9ea6bfd4c480f872098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:42:12.656092  368496 start.go:364] duration metric: took 43.15µs to acquireMachinesLock for "embed-certs-071527"
	I1101 10:42:12.656114  368496 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:42:12.656125  368496 fix.go:54] fixHost starting: 
	I1101 10:42:12.656377  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:12.675012  368496 fix.go:112] recreateIfNeeded on embed-certs-071527: state=Stopped err=<nil>
	W1101 10:42:12.675043  368496 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:42:09.661111  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:11.661382  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:12.483873  365377 pod_ready.go:104] pod "coredns-66bc5c9577-6zph7" is not "Ready", error: node "no-preload-753486" hosting pod "coredns-66bc5c9577-6zph7" is not "Ready" (will retry)
	W1101 10:42:14.484054  365377 pod_ready.go:104] pod "coredns-66bc5c9577-6zph7" is not "Ready", error: node "no-preload-753486" hosting pod "coredns-66bc5c9577-6zph7" is not "Ready" (will retry)
	I1101 10:42:12.676748  368496 out.go:252] * Restarting existing docker container for "embed-certs-071527" ...
	I1101 10:42:12.676817  368496 cli_runner.go:164] Run: docker start embed-certs-071527
	I1101 10:42:12.931557  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:12.950645  368496 kic.go:430] container "embed-certs-071527" state is running.
	I1101 10:42:12.951070  368496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:42:12.969851  368496 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/config.json ...
	I1101 10:42:12.970221  368496 machine.go:94] provisionDockerMachine start ...
	I1101 10:42:12.970300  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:12.990251  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:12.990557  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:12.990574  368496 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:42:12.991359  368496 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50424->127.0.0.1:33118: read: connection reset by peer
	I1101 10:42:16.134232  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-071527
	
	I1101 10:42:16.134260  368496 ubuntu.go:182] provisioning hostname "embed-certs-071527"
	I1101 10:42:16.134338  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:16.152535  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:16.152846  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:16.152872  368496 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-071527 && echo "embed-certs-071527" | sudo tee /etc/hostname
	I1101 10:42:16.304442  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-071527
	
	I1101 10:42:16.304550  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:16.321748  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:16.321964  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:16.321985  368496 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-071527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-071527/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-071527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:42:16.463326  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:42:16.463363  368496 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:42:16.463390  368496 ubuntu.go:190] setting up certificates
	I1101 10:42:16.463404  368496 provision.go:84] configureAuth start
	I1101 10:42:16.463473  368496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:42:16.480950  368496 provision.go:143] copyHostCerts
	I1101 10:42:16.481017  368496 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:42:16.481036  368496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:42:16.481123  368496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:42:16.481275  368496 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:42:16.481286  368496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:42:16.481327  368496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:42:16.481445  368496 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:42:16.481456  368496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:42:16.481487  368496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:42:16.481616  368496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.embed-certs-071527 san=[127.0.0.1 192.168.103.2 embed-certs-071527 localhost minikube]
	I1101 10:42:16.916939  368496 provision.go:177] copyRemoteCerts
	I1101 10:42:16.917007  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:42:16.917041  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:16.934924  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.035944  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:42:17.054849  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:42:17.073166  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:42:17.092130  368496 provision.go:87] duration metric: took 628.710617ms to configureAuth
	I1101 10:42:17.092165  368496 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:42:17.092378  368496 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:17.092532  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.110753  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:17.111008  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:17.111031  368496 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:42:17.409882  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:42:17.409917  368496 machine.go:97] duration metric: took 4.439676339s to provisionDockerMachine
	I1101 10:42:17.409931  368496 start.go:293] postStartSetup for "embed-certs-071527" (driver="docker")
	I1101 10:42:17.409943  368496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:42:17.410023  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:42:17.410075  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.428602  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	W1101 10:42:14.634914  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:16.636505  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:14.161336  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:16.661601  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	I1101 10:42:17.531781  368496 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:42:17.536220  368496 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:42:17.536251  368496 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:42:17.536265  368496 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 10:42:17.536325  368496 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 10:42:17.536436  368496 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem -> 615222.pem in /etc/ssl/certs
	I1101 10:42:17.536597  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:42:17.545281  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:42:17.563349  368496 start.go:296] duration metric: took 153.401996ms for postStartSetup
	I1101 10:42:17.563435  368496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:42:17.563473  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.580861  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.681364  368496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:42:17.686230  368496 fix.go:56] duration metric: took 5.030091922s for fixHost
	I1101 10:42:17.686258  368496 start.go:83] releasing machines lock for "embed-certs-071527", held for 5.030152616s
	I1101 10:42:17.686321  368496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:42:17.703788  368496 ssh_runner.go:195] Run: cat /version.json
	I1101 10:42:17.703833  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.703876  368496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:42:17.703957  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.723866  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.723875  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.886271  368496 ssh_runner.go:195] Run: systemctl --version
	I1101 10:42:17.892773  368496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:42:17.929416  368496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:42:17.934199  368496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:42:17.934268  368496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:42:17.942176  368496 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:42:17.942203  368496 start.go:496] detecting cgroup driver to use...
	I1101 10:42:17.942232  368496 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:42:17.942277  368496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:42:17.956846  368496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:42:17.969926  368496 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:42:17.969984  368496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:42:17.987763  368496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:42:18.000787  368496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:42:18.098750  368496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:42:18.185364  368496 docker.go:234] disabling docker service ...
	I1101 10:42:18.185425  368496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:42:18.200171  368496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:42:18.212245  368496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:42:18.299968  368496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:42:18.389487  368496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:42:18.402323  368496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:42:18.417595  368496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:42:18.417646  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.426413  368496 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:42:18.426460  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.438201  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.448731  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.457647  368496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:42:18.465716  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.474643  368496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.483603  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.494225  368496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:42:18.503559  368496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:42:18.511049  368496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:18.598345  368496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:42:18.709217  368496 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:42:18.709288  368496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:42:18.713313  368496 start.go:564] Will wait 60s for crictl version
	I1101 10:42:18.713366  368496 ssh_runner.go:195] Run: which crictl
	I1101 10:42:18.716906  368496 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:42:18.741616  368496 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
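For reference, a minimal sketch of how the CRI-O wiring configured above can be inspected by hand on the node (assuming shell access; the paths are the ones shown in this log):

    # crictl is pointed at the CRI-O socket by the /etc/crictl.yaml written above
    cat /etc/crictl.yaml            # runtime-endpoint: unix:///var/run/crio/crio.sock
    # confirm the pause image and cgroup manager set by the sed edits
    grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf
    # query the runtime the same way the test does
    sudo /usr/local/bin/crictl version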
	I1101 10:42:18.741679  368496 ssh_runner.go:195] Run: crio --version
	I1101 10:42:18.769631  368496 ssh_runner.go:195] Run: crio --version
	I1101 10:42:18.799572  368496 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:42:18.800779  368496 cli_runner.go:164] Run: docker network inspect embed-certs-071527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:42:18.817146  368496 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 10:42:18.821475  368496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:42:18.831787  368496 kubeadm.go:884] updating cluster {Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:42:18.831915  368496 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:18.831968  368496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:42:18.866384  368496 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:42:18.866405  368496 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:42:18.866449  368496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:42:18.892169  368496 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:42:18.892192  368496 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:42:18.892200  368496 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1101 10:42:18.892301  368496 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-071527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
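A quick way to confirm the kubelet drop-in above landed as intended (a sketch, assuming systemd on the node; the drop-in path appears later in this log):

    # show the merged unit, including /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl cat kubelet
    # check the overridden flags, e.g. the node IP
    grep -- '--node-ip' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf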
	I1101 10:42:18.892380  368496 ssh_runner.go:195] Run: crio config
	I1101 10:42:18.938000  368496 cni.go:84] Creating CNI manager for ""
	I1101 10:42:18.938023  368496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:18.938041  368496 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:42:18.938063  368496 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-071527 NodeName:embed-certs-071527 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:42:18.938182  368496 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-071527"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:42:18.938242  368496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:42:18.946826  368496 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:42:18.946897  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:42:18.954801  368496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1101 10:42:18.967590  368496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:42:18.981433  368496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
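The generated YAML is staged as /var/tmp/minikube/kubeadm.yaml.new; a minimal sketch of validating such a file offline, assuming a matching kubeadm binary sits next to kubelet and kubectl under /var/lib/minikube/binaries (not shown explicitly in this log):

    # parse and validate the config without touching the running cluster
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run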
	I1101 10:42:18.994976  368496 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:42:18.998531  368496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:42:19.009380  368496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:19.091222  368496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:42:19.122489  368496 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527 for IP: 192.168.103.2
	I1101 10:42:19.122542  368496 certs.go:195] generating shared ca certs ...
	I1101 10:42:19.122564  368496 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:19.122731  368496 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 10:42:19.122792  368496 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 10:42:19.122807  368496 certs.go:257] generating profile certs ...
	I1101 10:42:19.122926  368496 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/client.key
	I1101 10:42:19.122986  368496 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key.afddc8c1
	I1101 10:42:19.123047  368496 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.key
	I1101 10:42:19.123182  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem (1338 bytes)
	W1101 10:42:19.123233  368496 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522_empty.pem, impossibly tiny 0 bytes
	I1101 10:42:19.123245  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:42:19.123280  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:42:19.123308  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:42:19.123337  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 10:42:19.123388  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:42:19.124208  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:42:19.146314  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:42:19.168951  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:42:19.192551  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:42:19.220147  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:42:19.245723  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:42:19.268283  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:42:19.289183  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:42:19.311754  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem --> /usr/share/ca-certificates/61522.pem (1338 bytes)
	I1101 10:42:19.333810  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /usr/share/ca-certificates/615222.pem (1708 bytes)
	I1101 10:42:19.356124  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:42:19.377800  368496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:42:19.393408  368496 ssh_runner.go:195] Run: openssl version
	I1101 10:42:19.401003  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/61522.pem && ln -fs /usr/share/ca-certificates/61522.pem /etc/ssl/certs/61522.pem"
	I1101 10:42:19.411579  368496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/61522.pem
	I1101 10:42:19.415878  368496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:01 /usr/share/ca-certificates/61522.pem
	I1101 10:42:19.415933  368496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/61522.pem
	I1101 10:42:19.471208  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/61522.pem /etc/ssl/certs/51391683.0"
	I1101 10:42:19.482043  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/615222.pem && ln -fs /usr/share/ca-certificates/615222.pem /etc/ssl/certs/615222.pem"
	I1101 10:42:19.492517  368496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/615222.pem
	I1101 10:42:19.497198  368496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:01 /usr/share/ca-certificates/615222.pem
	I1101 10:42:19.497248  368496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/615222.pem
	I1101 10:42:19.553784  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/615222.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:42:19.564362  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:42:19.574902  368496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:19.579592  368496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:19.579650  368496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:19.633944  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
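The openssl/ln steps above implement OpenSSL's hashed-name lookup for trusted CAs; the same idea for a single certificate, using values from this log:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")     # b5213941 for this CA
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # makes the CA discoverable by hash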
	I1101 10:42:19.645552  368496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:42:19.650875  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:42:19.710929  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:42:19.765523  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:42:19.828247  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:42:19.877548  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:42:19.933659  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
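Each -checkend 86400 call above fails if the certificate expires within 24 hours; the same check can be looped over the control-plane certs (a sketch using the paths from this log):

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        && echo "${c}: valid for >24h" || echo "${c}: expires within 24h"
    done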
	I1101 10:42:19.992714  368496 kubeadm.go:401] StartCluster: {Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:19.992866  368496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:42:19.992928  368496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:42:20.036018  368496 cri.go:89] found id: "e95c5bdefe5bab954d844595226fa1bc71903693fcc281f98c8ca4acd6ebaf44"
	I1101 10:42:20.036180  368496 cri.go:89] found id: "1e1f2165fff912b94ead346d574a39dc51a0e07c82ecfc46cf2218274dc3846b"
	I1101 10:42:20.036188  368496 cri.go:89] found id: "cdeac8cd5ed20ed69f2cae85240af0e1ad8eda39a544a107fdc467d0259e681f"
	I1101 10:42:20.036193  368496 cri.go:89] found id: "2c76e616b169eed9eccc0cbbe049577478d27b125b73db1838da83e15bac755d"
	I1101 10:42:20.036197  368496 cri.go:89] found id: ""
	I1101 10:42:20.036250  368496 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:42:20.052319  368496 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:20Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:42:20.052419  368496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:42:20.064481  368496 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:42:20.064516  368496 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:42:20.064563  368496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:42:20.076775  368496 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:42:20.077819  368496 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-071527" does not appear in /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:20.078753  368496 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-58021/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-071527" cluster setting kubeconfig missing "embed-certs-071527" context setting]
	I1101 10:42:20.079735  368496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:20.081920  368496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:42:20.093440  368496 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1101 10:42:20.093482  368496 kubeadm.go:602] duration metric: took 28.955359ms to restartPrimaryControlPlane
	I1101 10:42:20.093501  368496 kubeadm.go:403] duration metric: took 100.790269ms to StartCluster
	I1101 10:42:20.093522  368496 settings.go:142] acquiring lock: {Name:mka443f0ac99a59b23190497686b8296dc73358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:20.093670  368496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:20.096021  368496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:20.096378  368496 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:42:20.096664  368496 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:20.096725  368496 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:42:20.096815  368496 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-071527"
	I1101 10:42:20.096843  368496 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-071527"
	W1101 10:42:20.096857  368496 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:42:20.096891  368496 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:42:20.097441  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.097446  368496 addons.go:70] Setting default-storageclass=true in profile "embed-certs-071527"
	I1101 10:42:20.097475  368496 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-071527"
	I1101 10:42:20.097611  368496 addons.go:70] Setting dashboard=true in profile "embed-certs-071527"
	I1101 10:42:20.097644  368496 addons.go:239] Setting addon dashboard=true in "embed-certs-071527"
	W1101 10:42:20.097654  368496 addons.go:248] addon dashboard should already be in state true
	I1101 10:42:20.097688  368496 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:42:20.097873  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.098187  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.098234  368496 out.go:179] * Verifying Kubernetes components...
	I1101 10:42:20.102685  368496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:20.124516  368496 addons.go:239] Setting addon default-storageclass=true in "embed-certs-071527"
	W1101 10:42:20.124543  368496 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:42:20.124572  368496 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:42:20.125148  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.126383  368496 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:42:20.126448  368496 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:42:20.127475  368496 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:42:20.127505  368496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:42:20.127560  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:20.129082  368496 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1101 10:42:16.983584  365377 pod_ready.go:104] pod "coredns-66bc5c9577-6zph7" is not "Ready", error: node "no-preload-753486" hosting pod "coredns-66bc5c9577-6zph7" is not "Ready" (will retry)
	I1101 10:42:17.983724  365377 pod_ready.go:94] pod "coredns-66bc5c9577-6zph7" is "Ready"
	I1101 10:42:17.983754  365377 pod_ready.go:86] duration metric: took 9.505816997s for pod "coredns-66bc5c9577-6zph7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:17.985915  365377 pod_ready.go:83] waiting for pod "etcd-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.491849  365377 pod_ready.go:94] pod "etcd-no-preload-753486" is "Ready"
	I1101 10:42:18.491875  365377 pod_ready.go:86] duration metric: took 505.934613ms for pod "etcd-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.494221  365377 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.498465  365377 pod_ready.go:94] pod "kube-apiserver-no-preload-753486" is "Ready"
	I1101 10:42:18.498489  365377 pod_ready.go:86] duration metric: took 4.246373ms for pod "kube-apiserver-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.500663  365377 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:42:20.511850  365377 pod_ready.go:104] pod "kube-controller-manager-no-preload-753486" is not "Ready", error: <nil>
	I1101 10:42:20.130030  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:42:20.130050  368496 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:42:20.130125  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:20.155538  368496 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:42:20.155564  368496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:42:20.155623  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:20.163671  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:20.169694  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:20.191939  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:20.288119  368496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:42:20.306159  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:42:20.306194  368496 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:42:20.310206  368496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:42:20.315991  368496 node_ready.go:35] waiting up to 6m0s for node "embed-certs-071527" to be "Ready" ...
	I1101 10:42:20.325168  368496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:42:20.333743  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:42:20.333815  368496 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:42:20.355195  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:42:20.355226  368496 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:42:20.378242  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:42:20.378264  368496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:42:20.400055  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:42:20.400089  368496 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:42:20.417257  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:42:20.417297  368496 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:42:20.434766  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:42:20.434792  368496 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:42:20.452816  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:42:20.452852  368496 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:42:20.470856  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:42:20.470887  368496 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:42:20.489267  368496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:42:21.827019  368496 node_ready.go:49] node "embed-certs-071527" is "Ready"
	I1101 10:42:21.827060  368496 node_ready.go:38] duration metric: took 1.511035582s for node "embed-certs-071527" to be "Ready" ...
	I1101 10:42:21.827077  368496 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:42:21.827147  368496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:42:22.482041  368496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.17180187s)
	I1101 10:42:22.482106  368496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.156906265s)
	I1101 10:42:22.482192  368496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.992885947s)
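Once the three kubectl apply runs above complete, the created objects can be listed with the same bundled kubectl and kubeconfig (a sketch):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl get deploy,sa,svc -A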
	I1101 10:42:22.482250  368496 api_server.go:72] duration metric: took 2.385830473s to wait for apiserver process to appear ...
	I1101 10:42:22.482267  368496 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:42:22.482351  368496 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:42:22.483670  368496 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-071527 addons enable metrics-server
	
	I1101 10:42:22.489684  368496 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:22.489716  368496 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:42:22.495086  368496 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1101 10:42:19.136186  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:21.136738  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:19.162930  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:21.661978  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	I1101 10:42:22.667611  359640 pod_ready.go:94] pod "coredns-5dd5756b68-9fdk6" is "Ready"
	I1101 10:42:22.667642  359640 pod_ready.go:86] duration metric: took 37.512281759s for pod "coredns-5dd5756b68-9fdk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.672431  359640 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.679390  359640 pod_ready.go:94] pod "etcd-old-k8s-version-707467" is "Ready"
	I1101 10:42:22.679419  359640 pod_ready.go:86] duration metric: took 6.957128ms for pod "etcd-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.685128  359640 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.690874  359640 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-707467" is "Ready"
	I1101 10:42:22.690900  359640 pod_ready.go:86] duration metric: took 5.745955ms for pod "kube-apiserver-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.695536  359640 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.860629  359640 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-707467" is "Ready"
	I1101 10:42:22.860711  359640 pod_ready.go:86] duration metric: took 165.147298ms for pod "kube-controller-manager-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.059741  359640 pod_ready.go:83] waiting for pod "kube-proxy-2pbws" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.459343  359640 pod_ready.go:94] pod "kube-proxy-2pbws" is "Ready"
	I1101 10:42:23.459373  359640 pod_ready.go:86] duration metric: took 399.595768ms for pod "kube-proxy-2pbws" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:42:23.010300  365377 pod_ready.go:104] pod "kube-controller-manager-no-preload-753486" is not "Ready", error: <nil>
	I1101 10:42:23.507130  365377 pod_ready.go:94] pod "kube-controller-manager-no-preload-753486" is "Ready"
	I1101 10:42:23.507157  365377 pod_ready.go:86] duration metric: took 5.00647596s for pod "kube-controller-manager-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.510616  365377 pod_ready.go:83] waiting for pod "kube-proxy-d5hv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.515189  365377 pod_ready.go:94] pod "kube-proxy-d5hv4" is "Ready"
	I1101 10:42:23.515214  365377 pod_ready.go:86] duration metric: took 4.571417ms for pod "kube-proxy-d5hv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.517263  365377 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.781834  365377 pod_ready.go:94] pod "kube-scheduler-no-preload-753486" is "Ready"
	I1101 10:42:23.781860  365377 pod_ready.go:86] duration metric: took 264.579645ms for pod "kube-scheduler-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.781872  365377 pod_ready.go:40] duration metric: took 15.30754162s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:23.838199  365377 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:42:23.841685  365377 out.go:179] * Done! kubectl is now configured to use "no-preload-753486" cluster and "default" namespace by default
	I1101 10:42:23.660338  359640 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:24.061116  359640 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-707467" is "Ready"
	I1101 10:42:24.061146  359640 pod_ready.go:86] duration metric: took 400.77729ms for pod "kube-scheduler-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:24.061163  359640 pod_ready.go:40] duration metric: took 38.910389326s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:24.128259  359640 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1101 10:42:24.167901  359640 out.go:203] 
	W1101 10:42:24.180817  359640 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:42:24.182810  359640 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:42:24.187301  359640 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-707467" cluster and "default" namespace by default
	I1101 10:42:22.496547  368496 addons.go:515] duration metric: took 2.399817846s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 10:42:22.982984  368496 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:42:22.989290  368496 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:22.989326  368496 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:42:23.483006  368496 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:42:23.488530  368496 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 10:42:23.489654  368496 api_server.go:141] control plane version: v1.34.1
	I1101 10:42:23.489681  368496 api_server.go:131] duration metric: took 1.007346794s to wait for apiserver health ...
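The 500-then-200 sequence above is the apiserver finishing its post-start hooks; the same endpoint can be probed by hand (a sketch, assuming anonymous access to /healthz is enabled, which is the upstream default):

    # ?verbose reproduces the per-check [+]/[-] breakdown shown in the log
    curl -k "https://192.168.103.2:8443/healthz?verbose"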
	I1101 10:42:23.489692  368496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:42:23.493313  368496 system_pods.go:59] 8 kube-system pods found
	I1101 10:42:23.493343  368496 system_pods.go:61] "coredns-66bc5c9577-c5td8" [8b884210-c20d-49e8-a595-b5d5e54a2362] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:23.493350  368496 system_pods.go:61] "etcd-embed-certs-071527" [d8a6e438-eddd-43f3-9608-3a008687442f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:42:23.493357  368496 system_pods.go:61] "kindnet-m4vzv" [ca8c842c-8f8c-46c9-844e-fa29b8bec68b] Running
	I1101 10:42:23.493362  368496 system_pods.go:61] "kube-apiserver-embed-certs-071527" [bd3db226-4dbc-4d1f-93ad-55ea39ecb425] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:42:23.493367  368496 system_pods.go:61] "kube-controller-manager-embed-certs-071527" [badbd218-84da-4a8a-b62d-3b8c2a60e20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:42:23.493374  368496 system_pods.go:61] "kube-proxy-l5pzc" [0d6bc572-4a6b-44f1-988f-6aa83896b936] Running
	I1101 10:42:23.493378  368496 system_pods.go:61] "kube-scheduler-embed-certs-071527" [44b21383-497b-452f-b64b-1792f143b547] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:42:23.493390  368496 system_pods.go:61] "storage-provisioner" [ff05c619-0eb3-487b-91e5-6e63996f8329] Running
	I1101 10:42:23.493401  368496 system_pods.go:74] duration metric: took 3.702533ms to wait for pod list to return data ...
	I1101 10:42:23.493411  368496 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:42:23.496249  368496 default_sa.go:45] found service account: "default"
	I1101 10:42:23.496271  368496 default_sa.go:55] duration metric: took 2.852113ms for default service account to be created ...
	I1101 10:42:23.496282  368496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:42:23.499163  368496 system_pods.go:86] 8 kube-system pods found
	I1101 10:42:23.499204  368496 system_pods.go:89] "coredns-66bc5c9577-c5td8" [8b884210-c20d-49e8-a595-b5d5e54a2362] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:23.499215  368496 system_pods.go:89] "etcd-embed-certs-071527" [d8a6e438-eddd-43f3-9608-3a008687442f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:42:23.499233  368496 system_pods.go:89] "kindnet-m4vzv" [ca8c842c-8f8c-46c9-844e-fa29b8bec68b] Running
	I1101 10:42:23.499243  368496 system_pods.go:89] "kube-apiserver-embed-certs-071527" [bd3db226-4dbc-4d1f-93ad-55ea39ecb425] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:42:23.499289  368496 system_pods.go:89] "kube-controller-manager-embed-certs-071527" [badbd218-84da-4a8a-b62d-3b8c2a60e20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:42:23.499304  368496 system_pods.go:89] "kube-proxy-l5pzc" [0d6bc572-4a6b-44f1-988f-6aa83896b936] Running
	I1101 10:42:23.499316  368496 system_pods.go:89] "kube-scheduler-embed-certs-071527" [44b21383-497b-452f-b64b-1792f143b547] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:42:23.499322  368496 system_pods.go:89] "storage-provisioner" [ff05c619-0eb3-487b-91e5-6e63996f8329] Running
	I1101 10:42:23.499332  368496 system_pods.go:126] duration metric: took 3.043029ms to wait for k8s-apps to be running ...
	I1101 10:42:23.499341  368496 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:42:23.499395  368496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:23.518085  368496 system_svc.go:56] duration metric: took 18.734056ms WaitForService to wait for kubelet
	I1101 10:42:23.518112  368496 kubeadm.go:587] duration metric: took 3.421696433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:42:23.518132  368496 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:42:23.521173  368496 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:42:23.521202  368496 node_conditions.go:123] node cpu capacity is 8
	I1101 10:42:23.521216  368496 node_conditions.go:105] duration metric: took 3.079009ms to run NodePressure ...
	I1101 10:42:23.521237  368496 start.go:242] waiting for startup goroutines ...
	I1101 10:42:23.521252  368496 start.go:247] waiting for cluster config update ...
	I1101 10:42:23.521272  368496 start.go:256] writing updated cluster config ...
	I1101 10:42:23.521614  368496 ssh_runner.go:195] Run: rm -f paused
	I1101 10:42:23.525820  368496 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:23.530097  368496 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c5td8" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:42:25.535303  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:23.138242  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:25.635848  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:27.536545  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:30.038586  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:27.636001  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:29.636138  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:32.135485  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:32.535642  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:35.035519  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:37.036976  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:34.136045  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:36.635396  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 01 10:42:22 no-preload-753486 crio[556]: time="2025-11-01T10:42:22.401330539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:22 no-preload-753486 crio[556]: time="2025-11-01T10:42:22.428103158Z" level=info msg="Created container 4f7af923324210855ae4649244938c5eb1bd3a5f07cf8a7189ce4721a8fd57be: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8b57h/kubernetes-dashboard" id=6cbaf337-1fa6-45ef-a50b-4a5456b89f96 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:22 no-preload-753486 crio[556]: time="2025-11-01T10:42:22.428679679Z" level=info msg="Starting container: 4f7af923324210855ae4649244938c5eb1bd3a5f07cf8a7189ce4721a8fd57be" id=c1b3fc20-9857-4a74-a4d3-78aaee8e006d name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:22 no-preload-753486 crio[556]: time="2025-11-01T10:42:22.430429383Z" level=info msg="Started container" PID=1481 containerID=4f7af923324210855ae4649244938c5eb1bd3a5f07cf8a7189ce4721a8fd57be description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8b57h/kubernetes-dashboard id=c1b3fc20-9857-4a74-a4d3-78aaee8e006d name=/runtime.v1.RuntimeService/StartContainer sandboxID=fabf4702590a207ffdb2c3383e14094bd6066268cbcc3b834033cd6c99d3654a
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.859572304Z" level=info msg="Pulled image: registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" id=797ae5f7-3c25-4812-be54-32867bcf73b3 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.86020905Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d82ac601-fb09-4ad3-8d88-201417b5b2c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.862772429Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=98898194-dba5-44f9-a128-1a57ef4ab860 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.86848294Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper" id=3a9f6f8e-4778-4970-83b1-d75f8316a5d2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.868624383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.875076884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.87557517Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.908162793Z" level=info msg="Created container 58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper" id=3a9f6f8e-4778-4970-83b1-d75f8316a5d2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.908803905Z" level=info msg="Starting container: 58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0" id=01948e7a-72d2-44ed-812c-0b25ea0c3c83 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:24 no-preload-753486 crio[556]: time="2025-11-01T10:42:24.910376165Z" level=info msg="Started container" PID=1722 containerID=58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper id=01948e7a-72d2-44ed-812c-0b25ea0c3c83 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d968a4dbc6f1e6d6e74d8a401376a080b40e8d51859255e4081a88e6a9dbad9f
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.39321788Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8d9918bf-aa44-4f2d-a6db-0813c343a214 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.396288096Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a4a43a15-562d-4e0c-af62-9458d6175aae name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.398832613Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper" id=b83be6eb-9f26-4c22-99eb-6416de253a4c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.398961616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.405568736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.406205996Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.437526781Z" level=info msg="Created container d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper" id=b83be6eb-9f26-4c22-99eb-6416de253a4c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.4381176Z" level=info msg="Starting container: d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2" id=c8ca1126-99b0-4978-a5fa-dcd61b983a01 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:25 no-preload-753486 crio[556]: time="2025-11-01T10:42:25.439759901Z" level=info msg="Started container" PID=1733 containerID=d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper id=c8ca1126-99b0-4978-a5fa-dcd61b983a01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d968a4dbc6f1e6d6e74d8a401376a080b40e8d51859255e4081a88e6a9dbad9f
	Nov 01 10:42:26 no-preload-753486 crio[556]: time="2025-11-01T10:42:26.398065372Z" level=info msg="Removing container: 58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0" id=6bb9c501-78ce-4a70-920b-ecba08d0292f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:42:26 no-preload-753486 crio[556]: time="2025-11-01T10:42:26.407823454Z" level=info msg="Removed container 58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj/dashboard-metrics-scraper" id=6bb9c501-78ce-4a70-920b-ecba08d0292f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d39dca098468c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   1                   d968a4dbc6f1e       dashboard-metrics-scraper-6ffb444bf9-8n5qj   kubernetes-dashboard
	4f7af92332421       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   17 seconds ago      Running             kubernetes-dashboard        0                   fabf4702590a2       kubernetes-dashboard-855c9754f9-8b57h        kubernetes-dashboard
	76c50ac8eb959       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           29 seconds ago      Running             coredns                     0                   b06e0c7f49903       coredns-66bc5c9577-6zph7                     kube-system
	a75bec9613145       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           29 seconds ago      Running             busybox                     1                   b921cd7872b79       busybox                                      default
	0586fe11d7a4a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           32 seconds ago      Running             kindnet-cni                 0                   7af868f8f7ef1       kindnet-dlvlr                                kube-system
	74bcc4b75d2f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           32 seconds ago      Exited              storage-provisioner         0                   eb7c10c050015       storage-provisioner                          kube-system
	59db1705ea900       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           32 seconds ago      Running             kube-proxy                  0                   6cac374637b3e       kube-proxy-d5hv4                             kube-system
	171c4eb221865       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           35 seconds ago      Running             kube-scheduler              0                   8a4a9611fc223       kube-scheduler-no-preload-753486             kube-system
	84b6025b4eb5c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           35 seconds ago      Running             kube-apiserver              0                   a24ff41707d51       kube-apiserver-no-preload-753486             kube-system
	fb6589d637b14       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           35 seconds ago      Running             kube-controller-manager     0                   65afca34f106b       kube-controller-manager-no-preload-753486    kube-system
	6a2b42f9da1f2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           35 seconds ago      Running             etcd                        0                   3c66cfcd271f2       etcd-no-preload-753486                       kube-system
	
	
	==> coredns [76c50ac8eb959c217b4973a9f7c453efe0965e09cddd683d81715e60193be21c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52051 - 60267 "HINFO IN 208790889979226413.1212580520873481645. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.031287692s
	
	
	==> describe nodes <==
	Name:               no-preload-753486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-753486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=no-preload-753486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_41_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:41:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-753486
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:42:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:42:17 +0000   Sat, 01 Nov 2025 10:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:42:17 +0000   Sat, 01 Nov 2025 10:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:42:17 +0000   Sat, 01 Nov 2025 10:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:42:17 +0000   Sat, 01 Nov 2025 10:42:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-753486
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                cc437131-bcc8-4de4-a901-e5bef9dd6b70
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 coredns-66bc5c9577-6zph7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     89s
	  kube-system                 etcd-no-preload-753486                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         95s
	  kube-system                 kindnet-dlvlr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-no-preload-753486              250m (3%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-no-preload-753486     200m (2%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-d5hv4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-no-preload-753486              100m (1%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8n5qj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8b57h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 88s                  kube-proxy       
	  Normal  Starting                 32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  100s (x8 over 100s)  kubelet          Node no-preload-753486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x8 over 100s)  kubelet          Node no-preload-753486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x8 over 100s)  kubelet          Node no-preload-753486 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     95s                  kubelet          Node no-preload-753486 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  95s                  kubelet          Node no-preload-753486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s                  kubelet          Node no-preload-753486 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 95s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           90s                  node-controller  Node no-preload-753486 event: Registered Node no-preload-753486 in Controller
	  Normal  NodeReady                76s                  kubelet          Node no-preload-753486 status is now: NodeReady
	  Normal  Starting                 36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)    kubelet          Node no-preload-753486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)    kubelet          Node no-preload-753486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)    kubelet          Node no-preload-753486 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                  node-controller  Node no-preload-753486 event: Registered Node no-preload-753486 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[Nov 1 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	
	
	==> etcd [6a2b42f9da1f25e14c5acbd289b7642c05c2c582183b501acc14800027b8bcd7] <==
	{"level":"warn","ts":"2025-11-01T10:42:06.297089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.303697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.312445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.319102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.325681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.332115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.339903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.348473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.354616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.361145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.372691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.378892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.385320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.391551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.397921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.405046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.411378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.418071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.431399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.437627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.444353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.460210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.466990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.473531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:06.527291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38724","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:42:40 up  2:24,  0 user,  load average: 4.49, 3.85, 2.51
	Linux no-preload-753486 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0586fe11d7a4a071268a355a4ef698fd26332adf54529b8328586db399b5fffd] <==
	I1101 10:42:07.787221       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:42:07.787475       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:42:07.787637       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:42:07.787657       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:42:07.787685       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:42:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:42:08.089299       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:42:08.181936       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:42:08.181969       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:42:08.182238       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:42:08.582064       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:42:08.582096       1 metrics.go:72] Registering metrics
	I1101 10:42:08.582178       1 controller.go:711] "Syncing nftables rules"
	I1101 10:42:18.090125       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:42:18.090244       1 main.go:301] handling current node
	I1101 10:42:28.089678       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:42:28.089736       1 main.go:301] handling current node
	I1101 10:42:38.098356       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 10:42:38.098400       1 main.go:301] handling current node
	
	
	==> kube-apiserver [84b6025b4eb5c817b09731471e31eff341dd6e3ddafe2af270b933c44ec0b51e] <==
	I1101 10:42:06.985245       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:42:06.985254       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:42:06.985263       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:42:06.985270       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:42:06.985276       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:42:06.985175       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:42:06.984986       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:42:06.985003       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:42:06.991185       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 10:42:06.991551       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:42:07.006080       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:42:07.012377       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:42:07.012411       1 policy_source.go:240] refreshing policies
	I1101 10:42:07.013933       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:42:07.219012       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:42:07.246684       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:42:07.263319       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:42:07.270724       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:42:07.277623       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:42:07.312754       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.111.21"}
	I1101 10:42:07.323565       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.128.248"}
	I1101 10:42:07.888348       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:42:10.563766       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:42:10.715091       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:42:10.813571       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [fb6589d637b145d192fbf2e4239b9fbb2482d88501af07f042fb1dd618dc43f5] <==
	I1101 10:42:10.312115       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:42:10.312105       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-753486"
	I1101 10:42:10.312176       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:42:10.313177       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:42:10.314431       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:42:10.315644       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:42:10.315700       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:42:10.315751       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:42:10.315764       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:42:10.315771       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:42:10.315795       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:42:10.317025       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:42:10.318216       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:42:10.318241       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:42:10.320597       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:42:10.321936       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:42:10.324192       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:42:10.326437       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:42:10.326454       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:42:10.326490       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:42:10.327637       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:42:10.328821       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:42:10.331011       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:42:10.337242       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:42:20.314082       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [59db1705ea900c5162380c88e4070278fb31d77328f589bfb883ac36648a8ddd] <==
	I1101 10:42:07.692097       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:42:07.764290       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:42:07.865288       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:42:07.865332       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:42:07.865558       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:42:07.883889       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:42:07.883954       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:42:07.890369       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:42:07.890828       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:42:07.890867       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:42:07.892433       1 config.go:200] "Starting service config controller"
	I1101 10:42:07.892444       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:42:07.892454       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:42:07.892433       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:42:07.892473       1 config.go:309] "Starting node config controller"
	I1101 10:42:07.892485       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:42:07.892506       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:42:07.892476       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:42:07.892873       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:42:07.993092       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:42:07.993150       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:42:07.993156       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [171c4eb221865cdd52d74e6f620b831af6e5394cf091a1eb5a93396a818ccd67] <==
	I1101 10:42:05.623061       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:42:06.905851       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:42:06.905892       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:42:06.905923       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:42:06.905934       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:42:06.952861       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:42:06.952896       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:42:06.955463       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:06.955528       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:06.955909       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:42:06.956146       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:42:07.055919       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:42:08 no-preload-753486 kubelet[705]: E1101 10:42:08.012622     705 projected.go:196] Error preparing data for projected volume kube-api-access-zh48p for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 01 10:42:08 no-preload-753486 kubelet[705]: E1101 10:42:08.012713     705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9dd3f019-b2ff-48ef-871e-baed334b2205-kube-api-access-zh48p podName:9dd3f019-b2ff-48ef-871e-baed334b2205 nodeName:}" failed. No retries permitted until 2025-11-01 10:42:09.012688038 +0000 UTC m=+4.784659610 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zh48p" (UniqueName: "kubernetes.io/projected/9dd3f019-b2ff-48ef-871e-baed334b2205-kube-api-access-zh48p") pod "busybox" (UID: "9dd3f019-b2ff-48ef-871e-baed334b2205") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 10:42:08 no-preload-753486 kubelet[705]: E1101 10:42:08.917648     705 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 10:42:08 no-preload-753486 kubelet[705]: E1101 10:42:08.917728     705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df6b9a0f-df6b-4830-ad22-495137f60f10-config-volume podName:df6b9a0f-df6b-4830-ad22-495137f60f10 nodeName:}" failed. No retries permitted until 2025-11-01 10:42:10.917712727 +0000 UTC m=+6.689684275 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/df6b9a0f-df6b-4830-ad22-495137f60f10-config-volume") pod "coredns-66bc5c9577-6zph7" (UID: "df6b9a0f-df6b-4830-ad22-495137f60f10") : object "kube-system"/"coredns" not registered
	Nov 01 10:42:09 no-preload-753486 kubelet[705]: E1101 10:42:09.017922     705 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 01 10:42:09 no-preload-753486 kubelet[705]: E1101 10:42:09.017957     705 projected.go:196] Error preparing data for projected volume kube-api-access-zh48p for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 01 10:42:09 no-preload-753486 kubelet[705]: E1101 10:42:09.018020     705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9dd3f019-b2ff-48ef-871e-baed334b2205-kube-api-access-zh48p podName:9dd3f019-b2ff-48ef-871e-baed334b2205 nodeName:}" failed. No retries permitted until 2025-11-01 10:42:11.018004815 +0000 UTC m=+6.789976381 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zh48p" (UniqueName: "kubernetes.io/projected/9dd3f019-b2ff-48ef-871e-baed334b2205-kube-api-access-zh48p") pod "busybox" (UID: "9dd3f019-b2ff-48ef-871e-baed334b2205") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 10:42:16 no-preload-753486 kubelet[705]: I1101 10:42:16.569827     705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:42:17 no-preload-753486 kubelet[705]: I1101 10:42:17.570321     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e5367590-1220-4c14-b08b-645ddae81b56-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-8n5qj\" (UID: \"e5367590-1220-4c14-b08b-645ddae81b56\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj"
	Nov 01 10:42:17 no-preload-753486 kubelet[705]: I1101 10:42:17.570390     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr9p9\" (UniqueName: \"kubernetes.io/projected/925c0c6b-e42b-4d12-b067-bbaf38b602ed-kube-api-access-vr9p9\") pod \"kubernetes-dashboard-855c9754f9-8b57h\" (UID: \"925c0c6b-e42b-4d12-b067-bbaf38b602ed\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8b57h"
	Nov 01 10:42:17 no-preload-753486 kubelet[705]: I1101 10:42:17.570455     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sbz6\" (UniqueName: \"kubernetes.io/projected/e5367590-1220-4c14-b08b-645ddae81b56-kube-api-access-6sbz6\") pod \"dashboard-metrics-scraper-6ffb444bf9-8n5qj\" (UID: \"e5367590-1220-4c14-b08b-645ddae81b56\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj"
	Nov 01 10:42:17 no-preload-753486 kubelet[705]: I1101 10:42:17.570546     705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/925c0c6b-e42b-4d12-b067-bbaf38b602ed-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8b57h\" (UID: \"925c0c6b-e42b-4d12-b067-bbaf38b602ed\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8b57h"
	Nov 01 10:42:25 no-preload-753486 kubelet[705]: I1101 10:42:25.392779     705 scope.go:117] "RemoveContainer" containerID="58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0"
	Nov 01 10:42:25 no-preload-753486 kubelet[705]: I1101 10:42:25.404000     705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8b57h" podStartSLOduration=10.865028727 podStartE2EDuration="15.403976262s" podCreationTimestamp="2025-11-01 10:42:10 +0000 UTC" firstStartedPulling="2025-11-01 10:42:17.852966179 +0000 UTC m=+13.624937730" lastFinishedPulling="2025-11-01 10:42:22.391913717 +0000 UTC m=+18.163885265" observedRunningTime="2025-11-01 10:42:23.39748315 +0000 UTC m=+19.169454721" watchObservedRunningTime="2025-11-01 10:42:25.403976262 +0000 UTC m=+21.175947833"
	Nov 01 10:42:26 no-preload-753486 kubelet[705]: I1101 10:42:26.396594     705 scope.go:117] "RemoveContainer" containerID="58732ef128f70e8d0b0cbf4b45cc9a9d729609740c27c65090c889e385d857d0"
	Nov 01 10:42:26 no-preload-753486 kubelet[705]: I1101 10:42:26.396747     705 scope.go:117] "RemoveContainer" containerID="d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2"
	Nov 01 10:42:26 no-preload-753486 kubelet[705]: E1101 10:42:26.396929     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8n5qj_kubernetes-dashboard(e5367590-1220-4c14-b08b-645ddae81b56)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj" podUID="e5367590-1220-4c14-b08b-645ddae81b56"
	Nov 01 10:42:27 no-preload-753486 kubelet[705]: I1101 10:42:27.400581     705 scope.go:117] "RemoveContainer" containerID="d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2"
	Nov 01 10:42:27 no-preload-753486 kubelet[705]: E1101 10:42:27.400761     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8n5qj_kubernetes-dashboard(e5367590-1220-4c14-b08b-645ddae81b56)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj" podUID="e5367590-1220-4c14-b08b-645ddae81b56"
	Nov 01 10:42:28 no-preload-753486 kubelet[705]: I1101 10:42:28.403553     705 scope.go:117] "RemoveContainer" containerID="d39dca098468c271a5cc8494cdbb1e7338f20180680a44da2112e9ed41a882a2"
	Nov 01 10:42:28 no-preload-753486 kubelet[705]: E1101 10:42:28.403784     705 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8n5qj_kubernetes-dashboard(e5367590-1220-4c14-b08b-645ddae81b56)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8n5qj" podUID="e5367590-1220-4c14-b08b-645ddae81b56"
	Nov 01 10:42:36 no-preload-753486 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:42:36 no-preload-753486 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:42:36 no-preload-753486 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:42:36 no-preload-753486 systemd[1]: kubelet.service: Consumed 1.222s CPU time.
	
	
	==> kubernetes-dashboard [4f7af923324210855ae4649244938c5eb1bd3a5f07cf8a7189ce4721a8fd57be] <==
	2025/11/01 10:42:22 Starting overwatch
	2025/11/01 10:42:22 Using namespace: kubernetes-dashboard
	2025/11/01 10:42:22 Using in-cluster config to connect to apiserver
	2025/11/01 10:42:22 Using secret token for csrf signing
	2025/11/01 10:42:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:42:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:42:22 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:42:22 Generating JWE encryption key
	2025/11/01 10:42:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:42:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:42:23 Initializing JWE encryption key from synchronized object
	2025/11/01 10:42:23 Creating in-cluster Sidecar client
	2025/11/01 10:42:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:42:23 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [74bcc4b75d2f266c94d273e33d106a067fc360c72037464209751efb3f223507] <==
	I1101 10:42:07.666857       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:42:37.670016       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-753486 -n no-preload-753486
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-753486 -n no-preload-753486: exit status 2 (359.80672ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-753486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.69s)
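For context on the storage-provisioner failure quoted above: its startup check dies dialing the in-cluster apiserver VIP (10.96.0.1:443) and times out. Below is a minimal Go sketch of that kind of reachability probe, assuming a plain TCP dial; the helper name and timeout are illustrative, not storage-provisioner or minikube code.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeAPIServer is a hypothetical helper: it attempts a TCP dial to the
// in-cluster apiserver service VIP and reports whether the connection is
// established before the timeout, mirroring the failing call in the log
// above ("dial tcp 10.96.0.1:443: i/o timeout").
func probeAPIServer(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("apiserver unreachable: %w", err)
	}
	return conn.Close()
}

func main() {
	// 10.96.0.1:443 is the default kubernetes Service VIP seen in the log.
	if err := probeAPIServer("10.96.0.1:443", 5*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver reachable")
}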

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-707467 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-707467 --alsologtostderr -v=1: exit status 80 (2.3757374s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-707467 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:42:36.999114  371886 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:42:36.999366  371886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:36.999374  371886 out.go:374] Setting ErrFile to fd 2...
	I1101 10:42:36.999378  371886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:36.999554  371886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:42:36.999748  371886 out.go:368] Setting JSON to false
	I1101 10:42:36.999765  371886 mustload.go:66] Loading cluster: old-k8s-version-707467
	I1101 10:42:37.000102  371886 config.go:182] Loaded profile config "old-k8s-version-707467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:42:37.000519  371886 cli_runner.go:164] Run: docker container inspect old-k8s-version-707467 --format={{.State.Status}}
	I1101 10:42:37.017724  371886 host.go:66] Checking if "old-k8s-version-707467" exists ...
	I1101 10:42:37.017970  371886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:37.075891  371886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-01 10:42:37.064946642 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:42:37.076780  371886 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-707467 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:42:37.078560  371886 out.go:179] * Pausing node old-k8s-version-707467 ... 
	I1101 10:42:37.079653  371886 host.go:66] Checking if "old-k8s-version-707467" exists ...
	I1101 10:42:37.079919  371886 ssh_runner.go:195] Run: systemctl --version
	I1101 10:42:37.079995  371886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-707467
	I1101 10:42:37.098732  371886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/old-k8s-version-707467/id_rsa Username:docker}
	I1101 10:42:37.199060  371886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:37.226954  371886 pause.go:52] kubelet running: true
	I1101 10:42:37.227037  371886 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:42:37.409778  371886 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:42:37.409888  371886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:42:37.487751  371886 cri.go:89] found id: "89bd390fe4bf137c8ab1c83b22c30abeb0ced55dee6477a4b15b0b2ec9274894"
	I1101 10:42:37.487776  371886 cri.go:89] found id: "54e8ae0b6db53e2ff5bf08aa06547a75997a2eca66fbbef9a892fbd7dc99d491"
	I1101 10:42:37.487780  371886 cri.go:89] found id: "913b97b4016cb2e7253976bd632a5f8f0aa1b6488a6b0bb1cba4538206af541b"
	I1101 10:42:37.487782  371886 cri.go:89] found id: "115751d0762eefbe98efde6fb18bf1cb486efe2af3bd390d82cd343eaccc0b56"
	I1101 10:42:37.487785  371886 cri.go:89] found id: "69766ce1ba06cdc04392db038af2182e2d12f992966b11e4498d358ade540d98"
	I1101 10:42:37.487788  371886 cri.go:89] found id: "db082c42e2322ac77e4c7ac5029613f4fc315ba2c60b168fd3ad9b50ea598e6a"
	I1101 10:42:37.487790  371886 cri.go:89] found id: "c351b883f4c7425bf4220670aefd0ab86d65f31b59b246d15d5a0099457dce03"
	I1101 10:42:37.487792  371886 cri.go:89] found id: "0e2eee682652453663ca05634fbc994a3a996b9febb53a7bbd8e5ba7558b3a22"
	I1101 10:42:37.487795  371886 cri.go:89] found id: "27186a49df0ceda967ebf7847c9ede3092c812946cd2c021b530c97b5dd0302f"
	I1101 10:42:37.487801  371886 cri.go:89] found id: "89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc"
	I1101 10:42:37.487806  371886 cri.go:89] found id: "3ab51e7e8cc6bcb05ed3ab119166fd47bcd81f27d5f66ee5192503bfea0b2f11"
	I1101 10:42:37.487810  371886 cri.go:89] found id: ""
	I1101 10:42:37.487854  371886 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:42:37.500093  371886 retry.go:31] will retry after 203.570252ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:37Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:42:37.704562  371886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:37.717907  371886 pause.go:52] kubelet running: false
	I1101 10:42:37.717980  371886 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:42:37.878870  371886 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:42:37.878983  371886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:42:37.963426  371886 cri.go:89] found id: "89bd390fe4bf137c8ab1c83b22c30abeb0ced55dee6477a4b15b0b2ec9274894"
	I1101 10:42:37.963451  371886 cri.go:89] found id: "54e8ae0b6db53e2ff5bf08aa06547a75997a2eca66fbbef9a892fbd7dc99d491"
	I1101 10:42:37.963456  371886 cri.go:89] found id: "913b97b4016cb2e7253976bd632a5f8f0aa1b6488a6b0bb1cba4538206af541b"
	I1101 10:42:37.963463  371886 cri.go:89] found id: "115751d0762eefbe98efde6fb18bf1cb486efe2af3bd390d82cd343eaccc0b56"
	I1101 10:42:37.963467  371886 cri.go:89] found id: "69766ce1ba06cdc04392db038af2182e2d12f992966b11e4498d358ade540d98"
	I1101 10:42:37.963472  371886 cri.go:89] found id: "db082c42e2322ac77e4c7ac5029613f4fc315ba2c60b168fd3ad9b50ea598e6a"
	I1101 10:42:37.963475  371886 cri.go:89] found id: "c351b883f4c7425bf4220670aefd0ab86d65f31b59b246d15d5a0099457dce03"
	I1101 10:42:37.963479  371886 cri.go:89] found id: "0e2eee682652453663ca05634fbc994a3a996b9febb53a7bbd8e5ba7558b3a22"
	I1101 10:42:37.963489  371886 cri.go:89] found id: "27186a49df0ceda967ebf7847c9ede3092c812946cd2c021b530c97b5dd0302f"
	I1101 10:42:37.963562  371886 cri.go:89] found id: "89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc"
	I1101 10:42:37.963573  371886 cri.go:89] found id: "3ab51e7e8cc6bcb05ed3ab119166fd47bcd81f27d5f66ee5192503bfea0b2f11"
	I1101 10:42:37.963579  371886 cri.go:89] found id: ""
	I1101 10:42:37.963626  371886 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:42:37.977435  371886 retry.go:31] will retry after 241.819718ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:37Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:42:38.219669  371886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:38.234058  371886 pause.go:52] kubelet running: false
	I1101 10:42:38.234147  371886 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:42:38.391076  371886 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:42:38.391173  371886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:42:38.460990  371886 cri.go:89] found id: "89bd390fe4bf137c8ab1c83b22c30abeb0ced55dee6477a4b15b0b2ec9274894"
	I1101 10:42:38.461011  371886 cri.go:89] found id: "54e8ae0b6db53e2ff5bf08aa06547a75997a2eca66fbbef9a892fbd7dc99d491"
	I1101 10:42:38.461015  371886 cri.go:89] found id: "913b97b4016cb2e7253976bd632a5f8f0aa1b6488a6b0bb1cba4538206af541b"
	I1101 10:42:38.461018  371886 cri.go:89] found id: "115751d0762eefbe98efde6fb18bf1cb486efe2af3bd390d82cd343eaccc0b56"
	I1101 10:42:38.461032  371886 cri.go:89] found id: "69766ce1ba06cdc04392db038af2182e2d12f992966b11e4498d358ade540d98"
	I1101 10:42:38.461037  371886 cri.go:89] found id: "db082c42e2322ac77e4c7ac5029613f4fc315ba2c60b168fd3ad9b50ea598e6a"
	I1101 10:42:38.461042  371886 cri.go:89] found id: "c351b883f4c7425bf4220670aefd0ab86d65f31b59b246d15d5a0099457dce03"
	I1101 10:42:38.461045  371886 cri.go:89] found id: "0e2eee682652453663ca05634fbc994a3a996b9febb53a7bbd8e5ba7558b3a22"
	I1101 10:42:38.461049  371886 cri.go:89] found id: "27186a49df0ceda967ebf7847c9ede3092c812946cd2c021b530c97b5dd0302f"
	I1101 10:42:38.461057  371886 cri.go:89] found id: "89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc"
	I1101 10:42:38.461062  371886 cri.go:89] found id: "3ab51e7e8cc6bcb05ed3ab119166fd47bcd81f27d5f66ee5192503bfea0b2f11"
	I1101 10:42:38.461066  371886 cri.go:89] found id: ""
	I1101 10:42:38.461110  371886 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:42:38.473215  371886 retry.go:31] will retry after 562.550062ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:38Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:42:39.036692  371886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:39.050208  371886 pause.go:52] kubelet running: false
	I1101 10:42:39.050275  371886 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:42:39.212533  371886 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:42:39.212624  371886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:42:39.288803  371886 cri.go:89] found id: "89bd390fe4bf137c8ab1c83b22c30abeb0ced55dee6477a4b15b0b2ec9274894"
	I1101 10:42:39.288826  371886 cri.go:89] found id: "54e8ae0b6db53e2ff5bf08aa06547a75997a2eca66fbbef9a892fbd7dc99d491"
	I1101 10:42:39.288831  371886 cri.go:89] found id: "913b97b4016cb2e7253976bd632a5f8f0aa1b6488a6b0bb1cba4538206af541b"
	I1101 10:42:39.288836  371886 cri.go:89] found id: "115751d0762eefbe98efde6fb18bf1cb486efe2af3bd390d82cd343eaccc0b56"
	I1101 10:42:39.288841  371886 cri.go:89] found id: "69766ce1ba06cdc04392db038af2182e2d12f992966b11e4498d358ade540d98"
	I1101 10:42:39.288846  371886 cri.go:89] found id: "db082c42e2322ac77e4c7ac5029613f4fc315ba2c60b168fd3ad9b50ea598e6a"
	I1101 10:42:39.288850  371886 cri.go:89] found id: "c351b883f4c7425bf4220670aefd0ab86d65f31b59b246d15d5a0099457dce03"
	I1101 10:42:39.288854  371886 cri.go:89] found id: "0e2eee682652453663ca05634fbc994a3a996b9febb53a7bbd8e5ba7558b3a22"
	I1101 10:42:39.288859  371886 cri.go:89] found id: "27186a49df0ceda967ebf7847c9ede3092c812946cd2c021b530c97b5dd0302f"
	I1101 10:42:39.288871  371886 cri.go:89] found id: "89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc"
	I1101 10:42:39.288875  371886 cri.go:89] found id: "3ab51e7e8cc6bcb05ed3ab119166fd47bcd81f27d5f66ee5192503bfea0b2f11"
	I1101 10:42:39.288880  371886 cri.go:89] found id: ""
	I1101 10:42:39.288925  371886 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:42:39.305339  371886 out.go:203] 
	W1101 10:42:39.306916  371886 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:42:39.306936  371886 out.go:285] * 
	* 
	W1101 10:42:39.311457  371886 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:42:39.313189  371886 out.go:203] 

                                                
                                                
** /stderr **
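The stderr trace above shows the shape of the failure: pause disables the kubelet, lists CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces, then repeatedly runs `sudo runc list -f json` and backs off each time it fails with "open /run/runc: no such file or directory", until it exits with GUEST_PAUSE. A minimal Go sketch of that retry-with-backoff pattern follows, assuming a local shell-out rather than minikube's SSH runner; the function name and backoff values are illustrative only.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunningContainers is a hypothetical stand-in for the "sudo runc list -f json"
// step seen above. It retries a few times with growing delays, matching the
// shape of the retry.go lines in the log ("will retry after ...ms").
func listRunningContainers(attempts int) ([]byte, error) {
	var lastErr error
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err == nil {
			return out, nil
		}
		// In the run above every attempt fails with
		// "open /run/runc: no such file or directory".
		lastErr = err
		time.Sleep(delay)
		delay *= 2
	}
	return nil, fmt.Errorf("list running: runc: %w", lastErr)
}

func main() {
	if _, err := listRunningContainers(3); err != nil {
		fmt.Println("pause would fail here:", err)
	}
}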
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-707467 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-707467
helpers_test.go:243: (dbg) docker inspect old-k8s-version-707467:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f",
	        "Created": "2025-11-01T10:40:17.695472964Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 360102,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:41:33.903604338Z",
	            "FinishedAt": "2025-11-01T10:41:32.96171933Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f-json.log",
	        "Name": "/old-k8s-version-707467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-707467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-707467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f",
	                "LowerDir": "/var/lib/docker/overlay2/7160d1b5f0bf0a1a80f7e6224067bd12b5c005fbd450c5ac9cab1240620258c8-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7160d1b5f0bf0a1a80f7e6224067bd12b5c005fbd450c5ac9cab1240620258c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7160d1b5f0bf0a1a80f7e6224067bd12b5c005fbd450c5ac9cab1240620258c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7160d1b5f0bf0a1a80f7e6224067bd12b5c005fbd450c5ac9cab1240620258c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-707467",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-707467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-707467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-707467",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-707467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b6837656b7129922483dd0f8826644b0f74efdbe0a28c4d31242a0ad64a33e6",
	            "SandboxKey": "/var/run/docker/netns/9b6837656b71",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-707467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:76:1d:6f:8d:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "415a138baf910ff492e7f96276b65b02f48a203fb2684ca5f89bd5de7de466d7",
	                    "EndpointID": "a466e4232ae6a1349f3666210bea310f2fb48cb4b711091c5509ce05b3d06b3d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-707467",
	                        "1c1720e1071c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
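The inspect output above is also where the earlier `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call finds the SSH endpoint (127.0.0.1:33108 for this container). Below is a minimal Go sketch of pulling that port out of the inspect JSON with encoding/json; the struct and helper names are assumptions for illustration, not minikube code.

package main

import (
	"encoding/json"
	"fmt"
)

// inspectEntry models only the fragment of `docker inspect` output used here:
// NetworkSettings.Ports maps "22/tcp" to a list of host bindings.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// sshHostPort is a hypothetical helper returning the host port bound to 22/tcp
// in the first entry of a `docker inspect <container>` JSON document.
func sshHostPort(inspectJSON []byte) (string, error) {
	var entries []inspectEntry
	if err := json.Unmarshal(inspectJSON, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no containers in inspect output")
	}
	bindings := entries[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("22/tcp is not published")
	}
	return bindings[0].HostPort, nil
}

func main() {
	// Trimmed-down sample in the same shape as the output above.
	doc := []byte(`[{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33108"}]}}}]`)
	port, err := sshHostPort(doc)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port:", port) // prints "ssh port: 33108" for this sample
}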
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-707467 -n old-k8s-version-707467
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-707467 -n old-k8s-version-707467: exit status 2 (342.884737ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
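The `--format={{.Host}}` and `--format={{.APIServer}}` flags used by the post-mortem helpers are Go text/template expressions evaluated against the status result, which is why the command can print "Running" while still exiting non-zero. A minimal sketch of that templating mechanism follows; the status struct is a made-up stand-in, with only the Host and APIServer fields taken from the templates used in this report.

package main

import (
	"os"
	"text/template"
)

// status is an illustrative stand-in for the value the template is applied to;
// minikube's real status type has more fields than shown here.
type status struct {
	Host      string
	APIServer string
}

func main() {
	st := status{Host: "Running", APIServer: "Running"}
	// The same template text passed via --format in the helpers above.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Running"
		panic(err)
	}
}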
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-707467 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-707467 logs -n 25: (1.236373247s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo crio config                                                                                                                                                                                                     │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p custom-flannel-299863                                                                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p disable-driver-mounts-339061                                                                                                                                                                                                               │ disable-driver-mounts-339061 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-707467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p no-preload-753486 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-071527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p embed-certs-071527 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p no-preload-753486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p no-preload-753486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-071527 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ no-preload-753486 image list --format=json                                                                                                                                                                                                    │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p no-preload-753486 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ old-k8s-version-707467 image list --format=json                                                                                                                                                                                               │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p old-k8s-version-707467 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:42:12
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:42:12.470667  368496 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:42:12.470926  368496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:12.470935  368496 out.go:374] Setting ErrFile to fd 2...
	I1101 10:42:12.470939  368496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:12.471197  368496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:42:12.471724  368496 out.go:368] Setting JSON to false
	I1101 10:42:12.473007  368496 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8672,"bootTime":1761985060,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:42:12.473093  368496 start.go:143] virtualization: kvm guest
	I1101 10:42:12.475052  368496 out.go:179] * [embed-certs-071527] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:42:12.476242  368496 notify.go:221] Checking for updates...
	I1101 10:42:12.476265  368496 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:42:12.477618  368496 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:42:12.479253  368496 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:12.480396  368496 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:42:12.481804  368496 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:42:12.482907  368496 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:42:12.484696  368496 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:12.485407  368496 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:42:12.510178  368496 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:42:12.510319  368496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:12.566440  368496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:42:12.556585444 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:42:12.566630  368496 docker.go:319] overlay module found
	I1101 10:42:12.568236  368496 out.go:179] * Using the docker driver based on existing profile
	W1101 10:42:08.135661  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:10.135866  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:12.136114  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	I1101 10:42:12.569580  368496 start.go:309] selected driver: docker
	I1101 10:42:12.569598  368496 start.go:930] validating driver "docker" against &{Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:12.569703  368496 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:42:12.570360  368496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:12.629103  368496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:42:12.61946754 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:42:12.629435  368496 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:42:12.629475  368496 cni.go:84] Creating CNI manager for ""
	I1101 10:42:12.629562  368496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:12.629618  368496 start.go:353] cluster config:
	{Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:12.631965  368496 out.go:179] * Starting "embed-certs-071527" primary control-plane node in "embed-certs-071527" cluster
	I1101 10:42:12.633029  368496 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:42:12.634067  368496 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:42:12.635049  368496 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:12.635095  368496 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:42:12.635108  368496 cache.go:59] Caching tarball of preloaded images
	I1101 10:42:12.635157  368496 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:42:12.635206  368496 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:42:12.635218  368496 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:42:12.635307  368496 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/config.json ...
	I1101 10:42:12.655932  368496 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:42:12.655974  368496 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:42:12.655999  368496 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:42:12.656030  368496 start.go:360] acquireMachinesLock for embed-certs-071527: {Name:mk6e96a90f486564e010d9ea6bfd4c480f872098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:42:12.656092  368496 start.go:364] duration metric: took 43.15µs to acquireMachinesLock for "embed-certs-071527"
	I1101 10:42:12.656114  368496 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:42:12.656125  368496 fix.go:54] fixHost starting: 
	I1101 10:42:12.656377  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:12.675012  368496 fix.go:112] recreateIfNeeded on embed-certs-071527: state=Stopped err=<nil>
	W1101 10:42:12.675043  368496 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:42:09.661111  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:11.661382  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:12.483873  365377 pod_ready.go:104] pod "coredns-66bc5c9577-6zph7" is not "Ready", error: node "no-preload-753486" hosting pod "coredns-66bc5c9577-6zph7" is not "Ready" (will retry)
	W1101 10:42:14.484054  365377 pod_ready.go:104] pod "coredns-66bc5c9577-6zph7" is not "Ready", error: node "no-preload-753486" hosting pod "coredns-66bc5c9577-6zph7" is not "Ready" (will retry)
	I1101 10:42:12.676748  368496 out.go:252] * Restarting existing docker container for "embed-certs-071527" ...
	I1101 10:42:12.676817  368496 cli_runner.go:164] Run: docker start embed-certs-071527
	I1101 10:42:12.931557  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:12.950645  368496 kic.go:430] container "embed-certs-071527" state is running.
	I1101 10:42:12.951070  368496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:42:12.969851  368496 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/config.json ...
	I1101 10:42:12.970221  368496 machine.go:94] provisionDockerMachine start ...
	I1101 10:42:12.970300  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:12.990251  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:12.990557  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:12.990574  368496 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:42:12.991359  368496 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50424->127.0.0.1:33118: read: connection reset by peer
	I1101 10:42:16.134232  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-071527
	
	I1101 10:42:16.134260  368496 ubuntu.go:182] provisioning hostname "embed-certs-071527"
	I1101 10:42:16.134338  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:16.152535  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:16.152846  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:16.152872  368496 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-071527 && echo "embed-certs-071527" | sudo tee /etc/hostname
	I1101 10:42:16.304442  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-071527
	
	I1101 10:42:16.304550  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:16.321748  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:16.321964  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:16.321985  368496 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-071527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-071527/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-071527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:42:16.463326  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:42:16.463363  368496 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:42:16.463390  368496 ubuntu.go:190] setting up certificates
	I1101 10:42:16.463404  368496 provision.go:84] configureAuth start
	I1101 10:42:16.463473  368496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:42:16.480950  368496 provision.go:143] copyHostCerts
	I1101 10:42:16.481017  368496 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:42:16.481036  368496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:42:16.481123  368496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:42:16.481275  368496 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:42:16.481286  368496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:42:16.481327  368496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:42:16.481445  368496 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:42:16.481456  368496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:42:16.481487  368496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:42:16.481616  368496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.embed-certs-071527 san=[127.0.0.1 192.168.103.2 embed-certs-071527 localhost minikube]
	I1101 10:42:16.916939  368496 provision.go:177] copyRemoteCerts
	I1101 10:42:16.917007  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:42:16.917041  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:16.934924  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.035944  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:42:17.054849  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:42:17.073166  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:42:17.092130  368496 provision.go:87] duration metric: took 628.710617ms to configureAuth
	I1101 10:42:17.092165  368496 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:42:17.092378  368496 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:17.092532  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.110753  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:17.111008  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:17.111031  368496 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:42:17.409882  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:42:17.409917  368496 machine.go:97] duration metric: took 4.439676339s to provisionDockerMachine
	I1101 10:42:17.409931  368496 start.go:293] postStartSetup for "embed-certs-071527" (driver="docker")
	I1101 10:42:17.409943  368496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:42:17.410023  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:42:17.410075  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.428602  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	W1101 10:42:14.634914  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:16.636505  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:14.161336  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:16.661601  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	I1101 10:42:17.531781  368496 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:42:17.536220  368496 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:42:17.536251  368496 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:42:17.536265  368496 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 10:42:17.536325  368496 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 10:42:17.536436  368496 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem -> 615222.pem in /etc/ssl/certs
	I1101 10:42:17.536597  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:42:17.545281  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:42:17.563349  368496 start.go:296] duration metric: took 153.401996ms for postStartSetup
	I1101 10:42:17.563435  368496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:42:17.563473  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.580861  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.681364  368496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:42:17.686230  368496 fix.go:56] duration metric: took 5.030091922s for fixHost
	I1101 10:42:17.686258  368496 start.go:83] releasing machines lock for "embed-certs-071527", held for 5.030152616s
	I1101 10:42:17.686321  368496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:42:17.703788  368496 ssh_runner.go:195] Run: cat /version.json
	I1101 10:42:17.703833  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.703876  368496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:42:17.703957  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.723866  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.723875  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.886271  368496 ssh_runner.go:195] Run: systemctl --version
	I1101 10:42:17.892773  368496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:42:17.929416  368496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:42:17.934199  368496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:42:17.934268  368496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:42:17.942176  368496 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:42:17.942203  368496 start.go:496] detecting cgroup driver to use...
	I1101 10:42:17.942232  368496 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:42:17.942277  368496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:42:17.956846  368496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:42:17.969926  368496 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:42:17.969984  368496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:42:17.987763  368496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:42:18.000787  368496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:42:18.098750  368496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:42:18.185364  368496 docker.go:234] disabling docker service ...
	I1101 10:42:18.185425  368496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:42:18.200171  368496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:42:18.212245  368496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:42:18.299968  368496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:42:18.389487  368496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:42:18.402323  368496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:42:18.417595  368496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:42:18.417646  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.426413  368496 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:42:18.426460  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.438201  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.448731  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.457647  368496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:42:18.465716  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.474643  368496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.483603  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.494225  368496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:42:18.503559  368496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:42:18.511049  368496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:18.598345  368496 ssh_runner.go:195] Run: sudo systemctl restart crio
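The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl) before crio is restarted. A minimal sketch for spot-checking the result on the node; the file path comes from the commands above, and these checks are illustrative rather than part of the recorded run:

    # show the values the sed edits are expected to have written
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # confirm crio came back up after the restart
    sudo systemctl is-active crio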
	I1101 10:42:18.709217  368496 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:42:18.709288  368496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:42:18.713313  368496 start.go:564] Will wait 60s for crictl version
	I1101 10:42:18.713366  368496 ssh_runner.go:195] Run: which crictl
	I1101 10:42:18.716906  368496 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:42:18.741616  368496 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:42:18.741679  368496 ssh_runner.go:195] Run: crio --version
	I1101 10:42:18.769631  368496 ssh_runner.go:195] Run: crio --version
	I1101 10:42:18.799572  368496 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:42:18.800779  368496 cli_runner.go:164] Run: docker network inspect embed-certs-071527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:42:18.817146  368496 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 10:42:18.821475  368496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:42:18.831787  368496 kubeadm.go:884] updating cluster {Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:42:18.831915  368496 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:18.831968  368496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:42:18.866384  368496 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:42:18.866405  368496 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:42:18.866449  368496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:42:18.892169  368496 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:42:18.892192  368496 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:42:18.892200  368496 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1101 10:42:18.892301  368496 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-071527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:42:18.892380  368496 ssh_runner.go:195] Run: crio config
	I1101 10:42:18.938000  368496 cni.go:84] Creating CNI manager for ""
	I1101 10:42:18.938023  368496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:18.938041  368496 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:42:18.938063  368496 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-071527 NodeName:embed-certs-071527 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:42:18.938182  368496 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-071527"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:42:18.938242  368496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:42:18.946826  368496 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:42:18.946897  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:42:18.954801  368496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1101 10:42:18.967590  368496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:42:18.981433  368496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1101 10:42:18.994976  368496 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:42:18.998531  368496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:42:19.009380  368496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:19.091222  368496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:42:19.122489  368496 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527 for IP: 192.168.103.2
	I1101 10:42:19.122542  368496 certs.go:195] generating shared ca certs ...
	I1101 10:42:19.122564  368496 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:19.122731  368496 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 10:42:19.122792  368496 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 10:42:19.122807  368496 certs.go:257] generating profile certs ...
	I1101 10:42:19.122926  368496 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/client.key
	I1101 10:42:19.122986  368496 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key.afddc8c1
	I1101 10:42:19.123047  368496 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.key
	I1101 10:42:19.123182  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem (1338 bytes)
	W1101 10:42:19.123233  368496 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522_empty.pem, impossibly tiny 0 bytes
	I1101 10:42:19.123245  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:42:19.123280  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:42:19.123308  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:42:19.123337  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 10:42:19.123388  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:42:19.124208  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:42:19.146314  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:42:19.168951  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:42:19.192551  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:42:19.220147  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:42:19.245723  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:42:19.268283  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:42:19.289183  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:42:19.311754  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem --> /usr/share/ca-certificates/61522.pem (1338 bytes)
	I1101 10:42:19.333810  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /usr/share/ca-certificates/615222.pem (1708 bytes)
	I1101 10:42:19.356124  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:42:19.377800  368496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:42:19.393408  368496 ssh_runner.go:195] Run: openssl version
	I1101 10:42:19.401003  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/61522.pem && ln -fs /usr/share/ca-certificates/61522.pem /etc/ssl/certs/61522.pem"
	I1101 10:42:19.411579  368496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/61522.pem
	I1101 10:42:19.415878  368496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:01 /usr/share/ca-certificates/61522.pem
	I1101 10:42:19.415933  368496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/61522.pem
	I1101 10:42:19.471208  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/61522.pem /etc/ssl/certs/51391683.0"
	I1101 10:42:19.482043  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/615222.pem && ln -fs /usr/share/ca-certificates/615222.pem /etc/ssl/certs/615222.pem"
	I1101 10:42:19.492517  368496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/615222.pem
	I1101 10:42:19.497198  368496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:01 /usr/share/ca-certificates/615222.pem
	I1101 10:42:19.497248  368496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/615222.pem
	I1101 10:42:19.553784  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/615222.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:42:19.564362  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:42:19.574902  368496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:19.579592  368496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:19.579650  368496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:19.633944  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
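The ln -fs commands above follow OpenSSL's hashed-directory convention: each symlink under /etc/ssl/certs is named after the certificate's subject-name hash plus a .0 suffix, which is exactly what the preceding openssl x509 -hash -noout calls print. A minimal sketch showing how the b5213941.0 name can be reproduced by hand (cert path taken from the commands above):

    # prints the subject hash used as the symlink name, e.g. b5213941
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem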
	I1101 10:42:19.645552  368496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:42:19.650875  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:42:19.710929  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:42:19.765523  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:42:19.828247  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:42:19.877548  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:42:19.933659  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
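Each of the openssl runs above uses -checkend 86400, which exits 0 only if the certificate is still valid 24 hours from now (non-zero if it will expire within that window). A minimal sketch of the same check run by hand against one of the profile certs; the apiserver.crt path is taken from the scp step earlier, and the echo wrapper is purely illustrative:

    # exit status tells you whether the cert survives the next 86400 seconds (24h)
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
        && echo "valid for at least 24h" \
        || echo "expires within 24h (or could not be read)"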
	I1101 10:42:19.992714  368496 kubeadm.go:401] StartCluster: {Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:19.992866  368496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:42:19.992928  368496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:42:20.036018  368496 cri.go:89] found id: "e95c5bdefe5bab954d844595226fa1bc71903693fcc281f98c8ca4acd6ebaf44"
	I1101 10:42:20.036180  368496 cri.go:89] found id: "1e1f2165fff912b94ead346d574a39dc51a0e07c82ecfc46cf2218274dc3846b"
	I1101 10:42:20.036188  368496 cri.go:89] found id: "cdeac8cd5ed20ed69f2cae85240af0e1ad8eda39a544a107fdc467d0259e681f"
	I1101 10:42:20.036193  368496 cri.go:89] found id: "2c76e616b169eed9eccc0cbbe049577478d27b125b73db1838da83e15bac755d"
	I1101 10:42:20.036197  368496 cri.go:89] found id: ""
	I1101 10:42:20.036250  368496 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:42:20.052319  368496 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:20Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:42:20.052419  368496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:42:20.064481  368496 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:42:20.064516  368496 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:42:20.064563  368496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:42:20.076775  368496 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:42:20.077819  368496 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-071527" does not appear in /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:20.078753  368496 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-58021/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-071527" cluster setting kubeconfig missing "embed-certs-071527" context setting]
	I1101 10:42:20.079735  368496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:20.081920  368496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:42:20.093440  368496 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1101 10:42:20.093482  368496 kubeadm.go:602] duration metric: took 28.955359ms to restartPrimaryControlPlane
	I1101 10:42:20.093501  368496 kubeadm.go:403] duration metric: took 100.790269ms to StartCluster
	I1101 10:42:20.093522  368496 settings.go:142] acquiring lock: {Name:mka443f0ac99a59b23190497686b8296dc73358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:20.093670  368496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:20.096021  368496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:20.096378  368496 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:42:20.096664  368496 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:20.096725  368496 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:42:20.096815  368496 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-071527"
	I1101 10:42:20.096843  368496 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-071527"
	W1101 10:42:20.096857  368496 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:42:20.096891  368496 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:42:20.097441  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.097446  368496 addons.go:70] Setting default-storageclass=true in profile "embed-certs-071527"
	I1101 10:42:20.097475  368496 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-071527"
	I1101 10:42:20.097611  368496 addons.go:70] Setting dashboard=true in profile "embed-certs-071527"
	I1101 10:42:20.097644  368496 addons.go:239] Setting addon dashboard=true in "embed-certs-071527"
	W1101 10:42:20.097654  368496 addons.go:248] addon dashboard should already be in state true
	I1101 10:42:20.097688  368496 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:42:20.097873  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.098187  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.098234  368496 out.go:179] * Verifying Kubernetes components...
	I1101 10:42:20.102685  368496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:20.124516  368496 addons.go:239] Setting addon default-storageclass=true in "embed-certs-071527"
	W1101 10:42:20.124543  368496 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:42:20.124572  368496 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:42:20.125148  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.126383  368496 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:42:20.126448  368496 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:42:20.127475  368496 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:42:20.127505  368496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:42:20.127560  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:20.129082  368496 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1101 10:42:16.983584  365377 pod_ready.go:104] pod "coredns-66bc5c9577-6zph7" is not "Ready", error: node "no-preload-753486" hosting pod "coredns-66bc5c9577-6zph7" is not "Ready" (will retry)
	I1101 10:42:17.983724  365377 pod_ready.go:94] pod "coredns-66bc5c9577-6zph7" is "Ready"
	I1101 10:42:17.983754  365377 pod_ready.go:86] duration metric: took 9.505816997s for pod "coredns-66bc5c9577-6zph7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:17.985915  365377 pod_ready.go:83] waiting for pod "etcd-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.491849  365377 pod_ready.go:94] pod "etcd-no-preload-753486" is "Ready"
	I1101 10:42:18.491875  365377 pod_ready.go:86] duration metric: took 505.934613ms for pod "etcd-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.494221  365377 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.498465  365377 pod_ready.go:94] pod "kube-apiserver-no-preload-753486" is "Ready"
	I1101 10:42:18.498489  365377 pod_ready.go:86] duration metric: took 4.246373ms for pod "kube-apiserver-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.500663  365377 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:42:20.511850  365377 pod_ready.go:104] pod "kube-controller-manager-no-preload-753486" is not "Ready", error: <nil>
	I1101 10:42:20.130030  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:42:20.130050  368496 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:42:20.130125  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:20.155538  368496 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:42:20.155564  368496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:42:20.155623  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:20.163671  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:20.169694  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:20.191939  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:20.288119  368496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:42:20.306159  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:42:20.306194  368496 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:42:20.310206  368496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:42:20.315991  368496 node_ready.go:35] waiting up to 6m0s for node "embed-certs-071527" to be "Ready" ...
	I1101 10:42:20.325168  368496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:42:20.333743  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:42:20.333815  368496 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:42:20.355195  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:42:20.355226  368496 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:42:20.378242  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:42:20.378264  368496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:42:20.400055  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:42:20.400089  368496 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:42:20.417257  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:42:20.417297  368496 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:42:20.434766  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:42:20.434792  368496 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:42:20.452816  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:42:20.452852  368496 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:42:20.470856  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:42:20.470887  368496 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:42:20.489267  368496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:42:21.827019  368496 node_ready.go:49] node "embed-certs-071527" is "Ready"
	I1101 10:42:21.827060  368496 node_ready.go:38] duration metric: took 1.511035582s for node "embed-certs-071527" to be "Ready" ...
	I1101 10:42:21.827077  368496 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:42:21.827147  368496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:42:22.482041  368496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.17180187s)
	I1101 10:42:22.482106  368496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.156906265s)
	I1101 10:42:22.482192  368496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.992885947s)
	I1101 10:42:22.482250  368496 api_server.go:72] duration metric: took 2.385830473s to wait for apiserver process to appear ...
	I1101 10:42:22.482267  368496 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:42:22.482351  368496 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:42:22.483670  368496 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-071527 addons enable metrics-server
	
	I1101 10:42:22.489684  368496 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:22.489716  368496 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
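	The 500s above come from minikube polling the apiserver's /healthz while the rbac/bootstrap-roles and scheduling post-start hooks are still finishing; they normally clear within a couple of seconds, as the later 200 response in this log shows. The same checks can be inspected by hand with kubectl (the per-check path is based on the apiserver's documented /healthz/<check> routes):
	
		kubectl --context embed-certs-071527 get --raw='/healthz?verbose'
		kubectl --context embed-certs-071527 get --raw='/healthz/poststarthook/rbac/bootstrap-roles'
	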
	I1101 10:42:22.495086  368496 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1101 10:42:19.136186  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:21.136738  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:19.162930  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:21.661978  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	I1101 10:42:22.667611  359640 pod_ready.go:94] pod "coredns-5dd5756b68-9fdk6" is "Ready"
	I1101 10:42:22.667642  359640 pod_ready.go:86] duration metric: took 37.512281759s for pod "coredns-5dd5756b68-9fdk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.672431  359640 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.679390  359640 pod_ready.go:94] pod "etcd-old-k8s-version-707467" is "Ready"
	I1101 10:42:22.679419  359640 pod_ready.go:86] duration metric: took 6.957128ms for pod "etcd-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.685128  359640 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.690874  359640 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-707467" is "Ready"
	I1101 10:42:22.690900  359640 pod_ready.go:86] duration metric: took 5.745955ms for pod "kube-apiserver-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.695536  359640 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.860629  359640 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-707467" is "Ready"
	I1101 10:42:22.860711  359640 pod_ready.go:86] duration metric: took 165.147298ms for pod "kube-controller-manager-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.059741  359640 pod_ready.go:83] waiting for pod "kube-proxy-2pbws" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.459343  359640 pod_ready.go:94] pod "kube-proxy-2pbws" is "Ready"
	I1101 10:42:23.459373  359640 pod_ready.go:86] duration metric: took 399.595768ms for pod "kube-proxy-2pbws" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:42:23.010300  365377 pod_ready.go:104] pod "kube-controller-manager-no-preload-753486" is not "Ready", error: <nil>
	I1101 10:42:23.507130  365377 pod_ready.go:94] pod "kube-controller-manager-no-preload-753486" is "Ready"
	I1101 10:42:23.507157  365377 pod_ready.go:86] duration metric: took 5.00647596s for pod "kube-controller-manager-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.510616  365377 pod_ready.go:83] waiting for pod "kube-proxy-d5hv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.515189  365377 pod_ready.go:94] pod "kube-proxy-d5hv4" is "Ready"
	I1101 10:42:23.515214  365377 pod_ready.go:86] duration metric: took 4.571417ms for pod "kube-proxy-d5hv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.517263  365377 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.781834  365377 pod_ready.go:94] pod "kube-scheduler-no-preload-753486" is "Ready"
	I1101 10:42:23.781860  365377 pod_ready.go:86] duration metric: took 264.579645ms for pod "kube-scheduler-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.781872  365377 pod_ready.go:40] duration metric: took 15.30754162s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:23.838199  365377 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:42:23.841685  365377 out.go:179] * Done! kubectl is now configured to use "no-preload-753486" cluster and "default" namespace by default
	I1101 10:42:23.660338  359640 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:24.061116  359640 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-707467" is "Ready"
	I1101 10:42:24.061146  359640 pod_ready.go:86] duration metric: took 400.77729ms for pod "kube-scheduler-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:24.061163  359640 pod_ready.go:40] duration metric: took 38.910389326s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:24.128259  359640 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1101 10:42:24.167901  359640 out.go:203] 
	W1101 10:42:24.180817  359640 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:42:24.182810  359640 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:42:24.187301  359640 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-707467" cluster and "default" namespace by default
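	The skew warning above is only advisory; to run a kubectl that matches this cluster's 1.28.0 control plane, the passthrough suggested in the log works together with the profile flag (minikube fetches a kubectl matching the cluster version on first use):
	
		out/minikube-linux-amd64 -p old-k8s-version-707467 kubectl -- get pods -A
	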
	I1101 10:42:22.496547  368496 addons.go:515] duration metric: took 2.399817846s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 10:42:22.982984  368496 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:42:22.989290  368496 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:22.989326  368496 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:42:23.483006  368496 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:42:23.488530  368496 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 10:42:23.489654  368496 api_server.go:141] control plane version: v1.34.1
	I1101 10:42:23.489681  368496 api_server.go:131] duration metric: took 1.007346794s to wait for apiserver health ...
	I1101 10:42:23.489692  368496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:42:23.493313  368496 system_pods.go:59] 8 kube-system pods found
	I1101 10:42:23.493343  368496 system_pods.go:61] "coredns-66bc5c9577-c5td8" [8b884210-c20d-49e8-a595-b5d5e54a2362] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:23.493350  368496 system_pods.go:61] "etcd-embed-certs-071527" [d8a6e438-eddd-43f3-9608-3a008687442f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:42:23.493357  368496 system_pods.go:61] "kindnet-m4vzv" [ca8c842c-8f8c-46c9-844e-fa29b8bec68b] Running
	I1101 10:42:23.493362  368496 system_pods.go:61] "kube-apiserver-embed-certs-071527" [bd3db226-4dbc-4d1f-93ad-55ea39ecb425] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:42:23.493367  368496 system_pods.go:61] "kube-controller-manager-embed-certs-071527" [badbd218-84da-4a8a-b62d-3b8c2a60e20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:42:23.493374  368496 system_pods.go:61] "kube-proxy-l5pzc" [0d6bc572-4a6b-44f1-988f-6aa83896b936] Running
	I1101 10:42:23.493378  368496 system_pods.go:61] "kube-scheduler-embed-certs-071527" [44b21383-497b-452f-b64b-1792f143b547] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:42:23.493390  368496 system_pods.go:61] "storage-provisioner" [ff05c619-0eb3-487b-91e5-6e63996f8329] Running
	I1101 10:42:23.493401  368496 system_pods.go:74] duration metric: took 3.702533ms to wait for pod list to return data ...
	I1101 10:42:23.493411  368496 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:42:23.496249  368496 default_sa.go:45] found service account: "default"
	I1101 10:42:23.496271  368496 default_sa.go:55] duration metric: took 2.852113ms for default service account to be created ...
	I1101 10:42:23.496282  368496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:42:23.499163  368496 system_pods.go:86] 8 kube-system pods found
	I1101 10:42:23.499204  368496 system_pods.go:89] "coredns-66bc5c9577-c5td8" [8b884210-c20d-49e8-a595-b5d5e54a2362] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:23.499215  368496 system_pods.go:89] "etcd-embed-certs-071527" [d8a6e438-eddd-43f3-9608-3a008687442f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:42:23.499233  368496 system_pods.go:89] "kindnet-m4vzv" [ca8c842c-8f8c-46c9-844e-fa29b8bec68b] Running
	I1101 10:42:23.499243  368496 system_pods.go:89] "kube-apiserver-embed-certs-071527" [bd3db226-4dbc-4d1f-93ad-55ea39ecb425] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:42:23.499289  368496 system_pods.go:89] "kube-controller-manager-embed-certs-071527" [badbd218-84da-4a8a-b62d-3b8c2a60e20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:42:23.499304  368496 system_pods.go:89] "kube-proxy-l5pzc" [0d6bc572-4a6b-44f1-988f-6aa83896b936] Running
	I1101 10:42:23.499316  368496 system_pods.go:89] "kube-scheduler-embed-certs-071527" [44b21383-497b-452f-b64b-1792f143b547] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:42:23.499322  368496 system_pods.go:89] "storage-provisioner" [ff05c619-0eb3-487b-91e5-6e63996f8329] Running
	I1101 10:42:23.499332  368496 system_pods.go:126] duration metric: took 3.043029ms to wait for k8s-apps to be running ...
	I1101 10:42:23.499341  368496 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:42:23.499395  368496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:23.518085  368496 system_svc.go:56] duration metric: took 18.734056ms WaitForService to wait for kubelet
	I1101 10:42:23.518112  368496 kubeadm.go:587] duration metric: took 3.421696433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:42:23.518132  368496 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:42:23.521173  368496 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:42:23.521202  368496 node_conditions.go:123] node cpu capacity is 8
	I1101 10:42:23.521216  368496 node_conditions.go:105] duration metric: took 3.079009ms to run NodePressure ...
	I1101 10:42:23.521237  368496 start.go:242] waiting for startup goroutines ...
	I1101 10:42:23.521252  368496 start.go:247] waiting for cluster config update ...
	I1101 10:42:23.521272  368496 start.go:256] writing updated cluster config ...
	I1101 10:42:23.521614  368496 ssh_runner.go:195] Run: rm -f paused
	I1101 10:42:23.525820  368496 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:23.530097  368496 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c5td8" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:42:25.535303  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:23.138242  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:25.635848  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:27.536545  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:30.038586  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:27.636001  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:29.636138  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:32.135485  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:32.535642  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:35.035519  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:37.036976  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:34.136045  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:36.635396  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
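	The repeated node_ready/pod_ready retries above are minikube's internal polling; the same waits can be reproduced directly with kubectl wait (context names are assumed to match the profile names, which is minikube's default):
	
		kubectl --context default-k8s-diff-port-433711 wait --for=condition=Ready node/default-k8s-diff-port-433711 --timeout=240s
		kubectl --context embed-certs-071527 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
	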
	
	
	==> CRI-O <==
	Nov 01 10:42:03 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:03.438864688Z" level=info msg="Started container" PID=1714 containerID=95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff/dashboard-metrics-scraper id=d64eaf75-c55a-439c-9ba2-9b9e01302b4c name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0d9ffdcfd0ab93f14a2e566caf3a68ef6380bd2f233e1126ee10c36265375f9
	Nov 01 10:42:04 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:04.392162326Z" level=info msg="Removing container: c106a72ce8260f7f8651b575237df15eb413cfc78ac90d3da8066f48ffa95086" id=a601c85f-f8d8-48a7-8189-c9de487c2fa1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:42:04 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:04.404576343Z" level=info msg="Removed container c106a72ce8260f7f8651b575237df15eb413cfc78ac90d3da8066f48ffa95086: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff/dashboard-metrics-scraper" id=a601c85f-f8d8-48a7-8189-c9de487c2fa1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.421952995Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0039f4c4-cc6c-448b-8103-4e37252f351a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.422810375Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=24fe31e9-69b6-4c0b-8aa9-da759192f305 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.423764623Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=85d6ec1e-e2cb-49bf-bc79-2af5a96517e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.423870489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.428004739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.428206727Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e91c0d18a43204f0c9b817ae891a926c7e353f7c0d53c56a232d7eae9b0b570c/merged/etc/passwd: no such file or directory"
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.428249623Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e91c0d18a43204f0c9b817ae891a926c7e353f7c0d53c56a232d7eae9b0b570c/merged/etc/group: no such file or directory"
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.428565458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.451085093Z" level=info msg="Created container 89bd390fe4bf137c8ab1c83b22c30abeb0ced55dee6477a4b15b0b2ec9274894: kube-system/storage-provisioner/storage-provisioner" id=85d6ec1e-e2cb-49bf-bc79-2af5a96517e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.451739393Z" level=info msg="Starting container: 89bd390fe4bf137c8ab1c83b22c30abeb0ced55dee6477a4b15b0b2ec9274894" id=fac47bc5-83db-4153-a433-f5c9e32b3cbb name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.453432384Z" level=info msg="Started container" PID=1729 containerID=89bd390fe4bf137c8ab1c83b22c30abeb0ced55dee6477a4b15b0b2ec9274894 description=kube-system/storage-provisioner/storage-provisioner id=fac47bc5-83db-4153-a433-f5c9e32b3cbb name=/runtime.v1.RuntimeService/StartContainer sandboxID=cefbb88db3a66ba001450e1a1a0ddc2023808efbd75238fa04e9b657079bf155
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.289457196Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ad8a7eb1-91d5-433c-8d77-d6e9003c2dd5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.290403655Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=228589a6-4f2e-4ed7-91b7-ff5b8fc6ffaf name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.291458376Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff/dashboard-metrics-scraper" id=2b9af76f-f7ea-4f97-ac62-3ac98259f60c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.291772322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.299521751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.300152506Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.337892874Z" level=info msg="Created container 89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff/dashboard-metrics-scraper" id=2b9af76f-f7ea-4f97-ac62-3ac98259f60c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.338718554Z" level=info msg="Starting container: 89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc" id=c0d96ce6-200f-4cdb-96e2-9143ab01dfe9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.340816836Z" level=info msg="Started container" PID=1745 containerID=89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff/dashboard-metrics-scraper id=c0d96ce6-200f-4cdb-96e2-9143ab01dfe9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0d9ffdcfd0ab93f14a2e566caf3a68ef6380bd2f233e1126ee10c36265375f9
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.433886572Z" level=info msg="Removing container: 95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65" id=df4461f9-5be3-4d8f-89e3-5aa8ba8fc60c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.444461181Z" level=info msg="Removed container 95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff/dashboard-metrics-scraper" id=df4461f9-5be3-4d8f-89e3-5aa8ba8fc60c name=/runtime.v1.RuntimeService/RemoveContainer
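	When the CRI-O log shows containers being created, removed and restarted like this, the node-level view is available through crictl over minikube ssh (a sketch, assuming the default crictl socket configuration inside the kicbase node; the ID prefix is the dashboard-metrics-scraper container listed in the status table below):
	
		out/minikube-linux-amd64 -p old-k8s-version-707467 ssh -- sudo crictl ps -a
		out/minikube-linux-amd64 -p old-k8s-version-707467 ssh -- sudo crictl logs 89e763df94703
	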
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	89e763df94703       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   f0d9ffdcfd0ab       dashboard-metrics-scraper-5f989dc9cf-6vrff       kubernetes-dashboard
	89bd390fe4bf1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   cefbb88db3a66       storage-provisioner                              kube-system
	3ab51e7e8cc6b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   575e6dc1834ce       kubernetes-dashboard-8694d4445c-d6xpb            kubernetes-dashboard
	54e8ae0b6db53       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   99220ee7b216b       coredns-5dd5756b68-9fdk6                         kube-system
	68e5cfc42e350       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   59b2bea21cba8       busybox                                          default
	913b97b4016cb       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   e2373e177e7ca       kube-proxy-2pbws                                 kube-system
	115751d0762ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   cefbb88db3a66       storage-provisioner                              kube-system
	69766ce1ba06c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   3c13e12b48275       kindnet-xxlgz                                    kube-system
	db082c42e2322       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   4b4340a970fa6       kube-scheduler-old-k8s-version-707467            kube-system
	c351b883f4c74       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   eed449378dac6       etcd-old-k8s-version-707467                      kube-system
	0e2eee6826524       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   1d827a1c2e17a       kube-controller-manager-old-k8s-version-707467   kube-system
	27186a49df0ce       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   843fe98b963fb       kube-apiserver-old-k8s-version-707467            kube-system
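	The dashboard-metrics-scraper container in the table above is Exited with ATTEMPT 2, i.e. it is being restarted repeatedly. Two standard ways to see why, using the pod name from the table (logs --previous shows the last exited attempt):
	
		kubectl --context old-k8s-version-707467 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-6vrff
		kubectl --context old-k8s-version-707467 -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-6vrff --previous
	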
	
	
	==> coredns [54e8ae0b6db53e2ff5bf08aa06547a75997a2eca66fbbef9a892fbd7dc99d491] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54721 - 12065 "HINFO IN 7409137930354339314.867580938443163605. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.035349298s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
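	The final CoreDNS warning means the pod could not reach the apiserver through the kubernetes Service VIP (10.96.0.1:443) before its timeout, which matches the window while the apiserver was still restarting. A quick sanity check is to confirm the Service still maps to the real apiserver address on the node IP (192.168.94.2 for this profile):
	
		kubectl --context old-k8s-version-707467 get endpoints kubernetes
	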
	
	
	==> describe nodes <==
	Name:               old-k8s-version-707467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-707467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=old-k8s-version-707467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_40_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:40:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-707467
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:42:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:42:14 +0000   Sat, 01 Nov 2025 10:40:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:42:14 +0000   Sat, 01 Nov 2025 10:40:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:42:14 +0000   Sat, 01 Nov 2025 10:40:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:42:14 +0000   Sat, 01 Nov 2025 10:41:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-707467
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                c2fc43a3-538e-4e6c-a223-e8844e524c0a
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-5dd5756b68-9fdk6                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-old-k8s-version-707467                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m8s
	  kube-system                 kindnet-xxlgz                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-707467             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-old-k8s-version-707467    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-2pbws                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-707467             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-6vrff        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-d6xpb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 113s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m7s               kubelet          Node old-k8s-version-707467 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s               kubelet          Node old-k8s-version-707467 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s               kubelet          Node old-k8s-version-707467 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m7s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s               node-controller  Node old-k8s-version-707467 event: Registered Node old-k8s-version-707467 in Controller
	  Normal  NodeReady                100s               kubelet          Node old-k8s-version-707467 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node old-k8s-version-707467 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node old-k8s-version-707467 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node old-k8s-version-707467 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-707467 event: Registered Node old-k8s-version-707467 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[Nov 1 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	
	
	==> etcd [c351b883f4c7425bf4220670aefd0ab86d65f31b59b246d15d5a0099457dce03] <==
	{"level":"info","ts":"2025-11-01T10:41:40.883052Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:41:40.883065Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:41:40.883333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-11-01T10:41:40.883403Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-01T10:41:40.883539Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:41:40.883583Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:41:40.885786Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T10:41:40.885868Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-01T10:41:40.885902Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-01T10:41:40.886133Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T10:41:40.88617Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:41:42.571375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T10:41:42.571438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:41:42.57146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-01T10:41:42.571478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T10:41:42.571486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-01T10:41:42.571529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T10:41:42.571543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-01T10:41:42.573062Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-707467 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:41:42.573065Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:41:42.573087Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:41:42.573354Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:41:42.573385Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T10:41:42.575299Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T10:41:42.57537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 10:42:40 up  2:25,  0 user,  load average: 4.49, 3.85, 2.51
	Linux old-k8s-version-707467 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [69766ce1ba06cdc04392db038af2182e2d12f992966b11e4498d358ade540d98] <==
	I1101 10:41:44.799820       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:41:44.800075       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 10:41:44.800258       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:41:44.800355       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:41:44.800400       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:41:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:41:45.094349       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:41:45.094387       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:41:45.094397       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:41:45.192820       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:41:45.502443       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:41:45.502472       1 metrics.go:72] Registering metrics
	I1101 10:41:45.502573       1 controller.go:711] "Syncing nftables rules"
	I1101 10:41:55.096622       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:41:55.096660       1 main.go:301] handling current node
	I1101 10:42:05.094295       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:42:05.094344       1 main.go:301] handling current node
	I1101 10:42:15.095115       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:42:15.095185       1 main.go:301] handling current node
	I1101 10:42:25.095997       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:42:25.096037       1 main.go:301] handling current node
	I1101 10:42:35.098574       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:42:35.098611       1 main.go:301] handling current node
	
	
	==> kube-apiserver [27186a49df0ceda967ebf7847c9ede3092c812946cd2c021b530c97b5dd0302f] <==
	I1101 10:41:43.616123       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:41:43.662249       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 10:41:43.662703       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 10:41:43.662786       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 10:41:43.662822       1 aggregator.go:166] initial CRD sync complete...
	I1101 10:41:43.662836       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 10:41:43.662843       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:41:43.662849       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:41:43.664031       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 10:41:43.664916       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:41:43.664969       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 10:41:43.664984       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1101 10:41:43.675567       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:41:43.679891       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 10:41:44.480291       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 10:41:44.511172       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 10:41:44.528103       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:41:44.535266       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:41:44.542863       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 10:41:44.567730       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:41:44.584320       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.94.226"}
	I1101 10:41:44.605741       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.231.116"}
	I1101 10:41:56.309189       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 10:41:56.457679       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:41:56.526207       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0e2eee682652453663ca05634fbc994a3a996b9febb53a7bbd8e5ba7558b3a22] <==
	I1101 10:41:56.312940       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1101 10:41:56.315398       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1101 10:41:56.325850       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-d6xpb"
	I1101 10:41:56.329389       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-6vrff"
	I1101 10:41:56.342322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.969831ms"
	I1101 10:41:56.350313       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="37.637808ms"
	I1101 10:41:56.356115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.731178ms"
	I1101 10:41:56.356209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.603µs"
	I1101 10:41:56.363706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="12.820047ms"
	I1101 10:41:56.363850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.756µs"
	I1101 10:41:56.367928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="109.37µs"
	I1101 10:41:56.379330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.4µs"
	I1101 10:41:56.525885       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:41:56.536954       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1101 10:41:56.589293       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:41:56.589436       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 10:42:01.495489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.135158ms"
	I1101 10:42:01.496002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="102.334µs"
	I1101 10:42:03.397221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.483µs"
	I1101 10:42:04.406136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.237µs"
	I1101 10:42:05.408984       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.293µs"
	I1101 10:42:18.444759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.191µs"
	I1101 10:42:22.648289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.831634ms"
	I1101 10:42:22.648585       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.855µs"
	I1101 10:42:26.653324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.12µs"
	
	
	==> kube-proxy [913b97b4016cb2e7253976bd632a5f8f0aa1b6488a6b0bb1cba4538206af541b] <==
	I1101 10:41:44.726534       1 server_others.go:69] "Using iptables proxy"
	I1101 10:41:44.737735       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1101 10:41:44.756404       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:41:44.758920       1 server_others.go:152] "Using iptables Proxier"
	I1101 10:41:44.758955       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 10:41:44.758961       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 10:41:44.758992       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 10:41:44.759271       1 server.go:846] "Version info" version="v1.28.0"
	I1101 10:41:44.759328       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:41:44.760040       1 config.go:188] "Starting service config controller"
	I1101 10:41:44.760456       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 10:41:44.760338       1 config.go:97] "Starting endpoint slice config controller"
	I1101 10:41:44.760527       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 10:41:44.760368       1 config.go:315] "Starting node config controller"
	I1101 10:41:44.760610       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 10:41:44.861628       1 shared_informer.go:318] Caches are synced for node config
	I1101 10:41:44.861825       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 10:41:44.861911       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [db082c42e2322ac77e4c7ac5029613f4fc315ba2c60b168fd3ad9b50ea598e6a] <==
	I1101 10:41:41.494556       1 serving.go:348] Generated self-signed cert in-memory
	I1101 10:41:43.650362       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 10:41:43.651447       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:41:43.655454       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1101 10:41:43.655542       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:41:43.655562       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 10:41:43.655547       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1101 10:41:43.655522       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:41:43.655687       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 10:41:43.656629       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 10:41:43.656714       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 10:41:43.756141       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 10:41:43.756154       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1101 10:41:43.756218       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:41:56 old-k8s-version-707467 kubelet[720]: I1101 10:41:56.340832     720 topology_manager.go:215] "Topology Admit Handler" podUID="a21016e2-b599-41cb-ba24-a99f52c2ff2b" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-6vrff"
	Nov 01 10:41:56 old-k8s-version-707467 kubelet[720]: I1101 10:41:56.395046     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssptf\" (UniqueName: \"kubernetes.io/projected/a21016e2-b599-41cb-ba24-a99f52c2ff2b-kube-api-access-ssptf\") pod \"dashboard-metrics-scraper-5f989dc9cf-6vrff\" (UID: \"a21016e2-b599-41cb-ba24-a99f52c2ff2b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff"
	Nov 01 10:41:56 old-k8s-version-707467 kubelet[720]: I1101 10:41:56.395297     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhknq\" (UniqueName: \"kubernetes.io/projected/a3296379-d073-4ef5-882d-36bc6b0d6961-kube-api-access-dhknq\") pod \"kubernetes-dashboard-8694d4445c-d6xpb\" (UID: \"a3296379-d073-4ef5-882d-36bc6b0d6961\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-d6xpb"
	Nov 01 10:41:56 old-k8s-version-707467 kubelet[720]: I1101 10:41:56.395361     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a21016e2-b599-41cb-ba24-a99f52c2ff2b-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-6vrff\" (UID: \"a21016e2-b599-41cb-ba24-a99f52c2ff2b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff"
	Nov 01 10:41:56 old-k8s-version-707467 kubelet[720]: I1101 10:41:56.395403     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a3296379-d073-4ef5-882d-36bc6b0d6961-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-d6xpb\" (UID: \"a3296379-d073-4ef5-882d-36bc6b0d6961\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-d6xpb"
	Nov 01 10:42:03 old-k8s-version-707467 kubelet[720]: I1101 10:42:03.386106     720 scope.go:117] "RemoveContainer" containerID="c106a72ce8260f7f8651b575237df15eb413cfc78ac90d3da8066f48ffa95086"
	Nov 01 10:42:03 old-k8s-version-707467 kubelet[720]: I1101 10:42:03.397065     720 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-d6xpb" podStartSLOduration=3.58007456 podCreationTimestamp="2025-11-01 10:41:56 +0000 UTC" firstStartedPulling="2025-11-01 10:41:56.686914809 +0000 UTC m=+16.495880372" lastFinishedPulling="2025-11-01 10:42:00.503833189 +0000 UTC m=+20.312798750" observedRunningTime="2025-11-01 10:42:01.421291142 +0000 UTC m=+21.230256739" watchObservedRunningTime="2025-11-01 10:42:03.396992938 +0000 UTC m=+23.205958563"
	Nov 01 10:42:04 old-k8s-version-707467 kubelet[720]: I1101 10:42:04.390772     720 scope.go:117] "RemoveContainer" containerID="c106a72ce8260f7f8651b575237df15eb413cfc78ac90d3da8066f48ffa95086"
	Nov 01 10:42:04 old-k8s-version-707467 kubelet[720]: I1101 10:42:04.390923     720 scope.go:117] "RemoveContainer" containerID="95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65"
	Nov 01 10:42:04 old-k8s-version-707467 kubelet[720]: E1101 10:42:04.391294     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6vrff_kubernetes-dashboard(a21016e2-b599-41cb-ba24-a99f52c2ff2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff" podUID="a21016e2-b599-41cb-ba24-a99f52c2ff2b"
	Nov 01 10:42:05 old-k8s-version-707467 kubelet[720]: I1101 10:42:05.395469     720 scope.go:117] "RemoveContainer" containerID="95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65"
	Nov 01 10:42:05 old-k8s-version-707467 kubelet[720]: E1101 10:42:05.396985     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6vrff_kubernetes-dashboard(a21016e2-b599-41cb-ba24-a99f52c2ff2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff" podUID="a21016e2-b599-41cb-ba24-a99f52c2ff2b"
	Nov 01 10:42:06 old-k8s-version-707467 kubelet[720]: I1101 10:42:06.643797     720 scope.go:117] "RemoveContainer" containerID="95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65"
	Nov 01 10:42:06 old-k8s-version-707467 kubelet[720]: E1101 10:42:06.644127     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6vrff_kubernetes-dashboard(a21016e2-b599-41cb-ba24-a99f52c2ff2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff" podUID="a21016e2-b599-41cb-ba24-a99f52c2ff2b"
	Nov 01 10:42:15 old-k8s-version-707467 kubelet[720]: I1101 10:42:15.421542     720 scope.go:117] "RemoveContainer" containerID="115751d0762eefbe98efde6fb18bf1cb486efe2af3bd390d82cd343eaccc0b56"
	Nov 01 10:42:18 old-k8s-version-707467 kubelet[720]: I1101 10:42:18.288816     720 scope.go:117] "RemoveContainer" containerID="95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65"
	Nov 01 10:42:18 old-k8s-version-707467 kubelet[720]: I1101 10:42:18.432676     720 scope.go:117] "RemoveContainer" containerID="95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65"
	Nov 01 10:42:18 old-k8s-version-707467 kubelet[720]: I1101 10:42:18.432948     720 scope.go:117] "RemoveContainer" containerID="89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc"
	Nov 01 10:42:18 old-k8s-version-707467 kubelet[720]: E1101 10:42:18.433319     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6vrff_kubernetes-dashboard(a21016e2-b599-41cb-ba24-a99f52c2ff2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff" podUID="a21016e2-b599-41cb-ba24-a99f52c2ff2b"
	Nov 01 10:42:26 old-k8s-version-707467 kubelet[720]: I1101 10:42:26.643387     720 scope.go:117] "RemoveContainer" containerID="89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc"
	Nov 01 10:42:26 old-k8s-version-707467 kubelet[720]: E1101 10:42:26.643803     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6vrff_kubernetes-dashboard(a21016e2-b599-41cb-ba24-a99f52c2ff2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff" podUID="a21016e2-b599-41cb-ba24-a99f52c2ff2b"
	Nov 01 10:42:37 old-k8s-version-707467 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:42:37 old-k8s-version-707467 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:42:37 old-k8s-version-707467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:42:37 old-k8s-version-707467 systemd[1]: kubelet.service: Consumed 1.618s CPU time.
	
	
	==> kubernetes-dashboard [3ab51e7e8cc6bcb05ed3ab119166fd47bcd81f27d5f66ee5192503bfea0b2f11] <==
	2025/11/01 10:42:00 Using namespace: kubernetes-dashboard
	2025/11/01 10:42:00 Using in-cluster config to connect to apiserver
	2025/11/01 10:42:00 Using secret token for csrf signing
	2025/11/01 10:42:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:42:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:42:00 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 10:42:00 Generating JWE encryption key
	2025/11/01 10:42:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:42:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:42:00 Initializing JWE encryption key from synchronized object
	2025/11/01 10:42:00 Creating in-cluster Sidecar client
	2025/11/01 10:42:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:42:00 Serving insecurely on HTTP port: 9090
	2025/11/01 10:42:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:42:00 Starting overwatch
	
	
	==> storage-provisioner [115751d0762eefbe98efde6fb18bf1cb486efe2af3bd390d82cd343eaccc0b56] <==
	I1101 10:41:44.684928       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:42:14.689129       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [89bd390fe4bf137c8ab1c83b22c30abeb0ced55dee6477a4b15b0b2ec9274894] <==
	I1101 10:42:15.464670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:42:15.472618       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:42:15.472661       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 10:42:32.869185       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:42:32.869342       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-707467_664555b9-0c5f-42fd-9371-aa4049299cfc!
	I1101 10:42:32.869311       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"97b1eecb-ad8d-49bf-af88-e6407fe47b1a", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-707467_664555b9-0c5f-42fd-9371-aa4049299cfc became leader
	I1101 10:42:32.969613       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-707467_664555b9-0c5f-42fd-9371-aa4049299cfc!
	

                                                
                                                
-- /stdout --
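The kubelet log above shows dashboard-metrics-scraper-5f989dc9cf-6vrff stuck in CrashLoopBackOff and the first storage-provisioner instance timing out against https://10.96.0.1:443. A minimal follow-up sketch (not part of the harness output; it assumes the kubectl context old-k8s-version-707467 used elsewhere in this report is still reachable):

	# Inspect the crash-looping scraper pod named in the kubelet log above.
	kubectl --context old-k8s-version-707467 -n kubernetes-dashboard \
	  describe pod dashboard-metrics-scraper-5f989dc9cf-6vrff
	# Fetch the previous container's logs to see why it keeps exiting.
	kubectl --context old-k8s-version-707467 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-5f989dc9cf-6vrff --previous --tail=50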
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-707467 -n old-k8s-version-707467
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-707467 -n old-k8s-version-707467: exit status 2 (348.146781ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-707467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
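A hedged variant of the query above (not run by the harness) that also prints the namespace and phase of each non-Running pod, which is usually easier to triage than the bare jsonpath name list:

	kubectl --context old-k8s-version-707467 get po -A \
	  --field-selector=status.phase!=Running \
	  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase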
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-707467
helpers_test.go:243: (dbg) docker inspect old-k8s-version-707467:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f",
	        "Created": "2025-11-01T10:40:17.695472964Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 360102,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:41:33.903604338Z",
	            "FinishedAt": "2025-11-01T10:41:32.96171933Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f/hosts",
	        "LogPath": "/var/lib/docker/containers/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f/1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f-json.log",
	        "Name": "/old-k8s-version-707467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-707467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-707467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c1720e1071c1ffd03c216d45efcb1857edc7aa0c8e7920b124d155bc9716b3f",
	                "LowerDir": "/var/lib/docker/overlay2/7160d1b5f0bf0a1a80f7e6224067bd12b5c005fbd450c5ac9cab1240620258c8-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7160d1b5f0bf0a1a80f7e6224067bd12b5c005fbd450c5ac9cab1240620258c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7160d1b5f0bf0a1a80f7e6224067bd12b5c005fbd450c5ac9cab1240620258c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7160d1b5f0bf0a1a80f7e6224067bd12b5c005fbd450c5ac9cab1240620258c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-707467",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-707467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-707467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-707467",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-707467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b6837656b7129922483dd0f8826644b0f74efdbe0a28c4d31242a0ad64a33e6",
	            "SandboxKey": "/var/run/docker/netns/9b6837656b71",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-707467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:76:1d:6f:8d:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "415a138baf910ff492e7f96276b65b02f48a203fb2684ca5f89bd5de7de466d7",
	                    "EndpointID": "a466e4232ae6a1349f3666210bea310f2fb48cb4b711091c5509ce05b3d06b3d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-707467",
	                        "1c1720e1071c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
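The full docker inspect dump above is large; for the pause failure only a few fields matter. A minimal sketch (not executed by the harness) that pulls them out with the same Go-template style minikube itself uses later in this log:

	# Container state: a successful pause should flip Paused to true; here it stayed running.
	docker inspect old-k8s-version-707467 --format '{{.State.Status}} paused={{.State.Paused}}'
	# Host port mapped to the apiserver port 8443/tcp (33111 in the dump above).
	docker inspect old-k8s-version-707467 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'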
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-707467 -n old-k8s-version-707467
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-707467 -n old-k8s-version-707467: exit status 2 (332.841494ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
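A hypothetical one-shot variant (not run by the harness) combining the {{.Host}} and {{.APIServer}} templates already used in this report; the {{.Kubelet}} field is an assumption about the other fields exposed by the status command:

	out/minikube-linux-amd64 status -p old-k8s-version-707467 \
	  --format 'host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'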
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-707467 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-707467 logs -n 25: (1.093653247s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-299863 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo crio config                                                                                                                                                                                                     │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p custom-flannel-299863                                                                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p disable-driver-mounts-339061                                                                                                                                                                                                               │ disable-driver-mounts-339061 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-707467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p no-preload-753486 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-071527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p embed-certs-071527 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p no-preload-753486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p no-preload-753486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-071527 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ no-preload-753486 image list --format=json                                                                                                                                                                                                    │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p no-preload-753486 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ old-k8s-version-707467 image list --format=json                                                                                                                                                                                               │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p old-k8s-version-707467 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:42:12
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:42:12.470667  368496 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:42:12.470926  368496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:12.470935  368496 out.go:374] Setting ErrFile to fd 2...
	I1101 10:42:12.470939  368496 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:12.471197  368496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:42:12.471724  368496 out.go:368] Setting JSON to false
	I1101 10:42:12.473007  368496 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8672,"bootTime":1761985060,"procs":349,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:42:12.473093  368496 start.go:143] virtualization: kvm guest
	I1101 10:42:12.475052  368496 out.go:179] * [embed-certs-071527] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:42:12.476242  368496 notify.go:221] Checking for updates...
	I1101 10:42:12.476265  368496 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:42:12.477618  368496 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:42:12.479253  368496 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:12.480396  368496 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:42:12.481804  368496 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:42:12.482907  368496 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:42:12.484696  368496 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:12.485407  368496 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:42:12.510178  368496 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:42:12.510319  368496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:12.566440  368496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:42:12.556585444 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:42:12.566630  368496 docker.go:319] overlay module found
	I1101 10:42:12.568236  368496 out.go:179] * Using the docker driver based on existing profile
	W1101 10:42:08.135661  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:10.135866  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:12.136114  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	I1101 10:42:12.569580  368496 start.go:309] selected driver: docker
	I1101 10:42:12.569598  368496 start.go:930] validating driver "docker" against &{Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:12.569703  368496 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:42:12.570360  368496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:12.629103  368496 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 10:42:12.61946754 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:42:12.629435  368496 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:42:12.629475  368496 cni.go:84] Creating CNI manager for ""
	I1101 10:42:12.629562  368496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:12.629618  368496 start.go:353] cluster config:
	{Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:12.631965  368496 out.go:179] * Starting "embed-certs-071527" primary control-plane node in "embed-certs-071527" cluster
	I1101 10:42:12.633029  368496 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:42:12.634067  368496 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:42:12.635049  368496 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:12.635095  368496 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:42:12.635108  368496 cache.go:59] Caching tarball of preloaded images
	I1101 10:42:12.635157  368496 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:42:12.635206  368496 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:42:12.635218  368496 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:42:12.635307  368496 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/config.json ...
	I1101 10:42:12.655932  368496 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:42:12.655974  368496 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:42:12.655999  368496 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:42:12.656030  368496 start.go:360] acquireMachinesLock for embed-certs-071527: {Name:mk6e96a90f486564e010d9ea6bfd4c480f872098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:42:12.656092  368496 start.go:364] duration metric: took 43.15µs to acquireMachinesLock for "embed-certs-071527"
	I1101 10:42:12.656114  368496 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:42:12.656125  368496 fix.go:54] fixHost starting: 
	I1101 10:42:12.656377  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:12.675012  368496 fix.go:112] recreateIfNeeded on embed-certs-071527: state=Stopped err=<nil>
	W1101 10:42:12.675043  368496 fix.go:138] unexpected machine state, will restart: <nil>
	W1101 10:42:09.661111  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:11.661382  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:12.483873  365377 pod_ready.go:104] pod "coredns-66bc5c9577-6zph7" is not "Ready", error: node "no-preload-753486" hosting pod "coredns-66bc5c9577-6zph7" is not "Ready" (will retry)
	W1101 10:42:14.484054  365377 pod_ready.go:104] pod "coredns-66bc5c9577-6zph7" is not "Ready", error: node "no-preload-753486" hosting pod "coredns-66bc5c9577-6zph7" is not "Ready" (will retry)
	I1101 10:42:12.676748  368496 out.go:252] * Restarting existing docker container for "embed-certs-071527" ...
	I1101 10:42:12.676817  368496 cli_runner.go:164] Run: docker start embed-certs-071527
	I1101 10:42:12.931557  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:12.950645  368496 kic.go:430] container "embed-certs-071527" state is running.
	I1101 10:42:12.951070  368496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:42:12.969851  368496 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/config.json ...
	I1101 10:42:12.970221  368496 machine.go:94] provisionDockerMachine start ...
	I1101 10:42:12.970300  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:12.990251  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:12.990557  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:12.990574  368496 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:42:12.991359  368496 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50424->127.0.0.1:33118: read: connection reset by peer
	I1101 10:42:16.134232  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-071527
	
	I1101 10:42:16.134260  368496 ubuntu.go:182] provisioning hostname "embed-certs-071527"
	I1101 10:42:16.134338  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:16.152535  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:16.152846  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:16.152872  368496 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-071527 && echo "embed-certs-071527" | sudo tee /etc/hostname
	I1101 10:42:16.304442  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-071527
	
	I1101 10:42:16.304550  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:16.321748  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:16.321964  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:16.321985  368496 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-071527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-071527/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-071527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:42:16.463326  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:42:16.463363  368496 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:42:16.463390  368496 ubuntu.go:190] setting up certificates
	I1101 10:42:16.463404  368496 provision.go:84] configureAuth start
	I1101 10:42:16.463473  368496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:42:16.480950  368496 provision.go:143] copyHostCerts
	I1101 10:42:16.481017  368496 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:42:16.481036  368496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:42:16.481123  368496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:42:16.481275  368496 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:42:16.481286  368496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:42:16.481327  368496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:42:16.481445  368496 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:42:16.481456  368496 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:42:16.481487  368496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:42:16.481616  368496 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.embed-certs-071527 san=[127.0.0.1 192.168.103.2 embed-certs-071527 localhost minikube]
	I1101 10:42:16.916939  368496 provision.go:177] copyRemoteCerts
	I1101 10:42:16.917007  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:42:16.917041  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:16.934924  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.035944  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:42:17.054849  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:42:17.073166  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:42:17.092130  368496 provision.go:87] duration metric: took 628.710617ms to configureAuth
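configureAuth above regenerates the machine's server certificate with the SANs printed by provision.go ([127.0.0.1 192.168.103.2 embed-certs-071527 localhost minikube]) and copyRemoteCerts pushes it, with the CA, to /etc/docker on the node. A minimal check of the result, assuming the remote paths shown in those scp lines:

    # Print the SAN list of the freshly generated server certificate
    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # Confirm it chains to the CA copied alongside it
    openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem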
	I1101 10:42:17.092165  368496 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:42:17.092378  368496 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:17.092532  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.110753  368496 main.go:143] libmachine: Using SSH client type: native
	I1101 10:42:17.111008  368496 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 10:42:17.111031  368496 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:42:17.409882  368496 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:42:17.409917  368496 machine.go:97] duration metric: took 4.439676339s to provisionDockerMachine
	I1101 10:42:17.409931  368496 start.go:293] postStartSetup for "embed-certs-071527" (driver="docker")
	I1101 10:42:17.409943  368496 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:42:17.410023  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:42:17.410075  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.428602  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	W1101 10:42:14.634914  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:16.636505  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:14.161336  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:16.661601  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	I1101 10:42:17.531781  368496 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:42:17.536220  368496 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:42:17.536251  368496 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:42:17.536265  368496 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 10:42:17.536325  368496 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 10:42:17.536436  368496 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem -> 615222.pem in /etc/ssl/certs
	I1101 10:42:17.536597  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:42:17.545281  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:42:17.563349  368496 start.go:296] duration metric: took 153.401996ms for postStartSetup
	I1101 10:42:17.563435  368496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:42:17.563473  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.580861  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.681364  368496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:42:17.686230  368496 fix.go:56] duration metric: took 5.030091922s for fixHost
	I1101 10:42:17.686258  368496 start.go:83] releasing machines lock for "embed-certs-071527", held for 5.030152616s
	I1101 10:42:17.686321  368496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-071527
	I1101 10:42:17.703788  368496 ssh_runner.go:195] Run: cat /version.json
	I1101 10:42:17.703833  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.703876  368496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:42:17.703957  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:17.723866  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.723875  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:17.886271  368496 ssh_runner.go:195] Run: systemctl --version
	I1101 10:42:17.892773  368496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:42:17.929416  368496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:42:17.934199  368496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:42:17.934268  368496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:42:17.942176  368496 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:42:17.942203  368496 start.go:496] detecting cgroup driver to use...
	I1101 10:42:17.942232  368496 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:42:17.942277  368496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:42:17.956846  368496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:42:17.969926  368496 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:42:17.969984  368496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:42:17.987763  368496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:42:18.000787  368496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:42:18.098750  368496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:42:18.185364  368496 docker.go:234] disabling docker service ...
	I1101 10:42:18.185425  368496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:42:18.200171  368496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:42:18.212245  368496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:42:18.299968  368496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:42:18.389487  368496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:42:18.402323  368496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:42:18.417595  368496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:42:18.417646  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.426413  368496 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:42:18.426460  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.438201  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.448731  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.457647  368496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:42:18.465716  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.474643  368496 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.483603  368496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:42:18.494225  368496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:42:18.503559  368496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:42:18.511049  368496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:18.598345  368496 ssh_runner.go:195] Run: sudo systemctl restart crio
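The sed edits above all target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager to systemd to match the driver detected on the host, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. The cgroup-related part, collapsed into one sketch with the same commands and file:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio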
	I1101 10:42:18.709217  368496 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:42:18.709288  368496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:42:18.713313  368496 start.go:564] Will wait 60s for crictl version
	I1101 10:42:18.713366  368496 ssh_runner.go:195] Run: which crictl
	I1101 10:42:18.716906  368496 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:42:18.741616  368496 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:42:18.741679  368496 ssh_runner.go:195] Run: crio --version
	I1101 10:42:18.769631  368496 ssh_runner.go:195] Run: crio --version
	I1101 10:42:18.799572  368496 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:42:18.800779  368496 cli_runner.go:164] Run: docker network inspect embed-certs-071527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:42:18.817146  368496 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 10:42:18.821475  368496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:42:18.831787  368496 kubeadm.go:884] updating cluster {Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:42:18.831915  368496 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:18.831968  368496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:42:18.866384  368496 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:42:18.866405  368496 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:42:18.866449  368496 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:42:18.892169  368496 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:42:18.892192  368496 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:42:18.892200  368496 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1101 10:42:18.892301  368496 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-071527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
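The drop-in written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf uses the empty ExecStart= line to clear whatever ExecStart the base unit carries before substituting the flags above; systemd merges base unit and drop-in when the unit is loaded. A small verification sketch, mirroring the daemon-reload and start that follow later in this log:

    systemctl cat kubelet                                          # base unit plus the 10-kubeadm.conf override
    sudo systemctl daemon-reload && sudo systemctl start kubelet   # as done a few lines below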
	I1101 10:42:18.892380  368496 ssh_runner.go:195] Run: crio config
	I1101 10:42:18.938000  368496 cni.go:84] Creating CNI manager for ""
	I1101 10:42:18.938023  368496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:18.938041  368496 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:42:18.938063  368496 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-071527 NodeName:embed-certs-071527 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:42:18.938182  368496 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-071527"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:42:18.938242  368496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:42:18.946826  368496 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:42:18.946897  368496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:42:18.954801  368496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1101 10:42:18.967590  368496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:42:18.981433  368496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
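The kubeadm config rendered above has just been written to /var/tmp/minikube/kubeadm.yaml.new; on a restart, minikube diffs it against the copy left by the previous start and only reconfigures the control plane if they differ (see the `sudo diff -u ...` and "does not require reconfiguration" lines further down). A sketch of that decision, assuming the same paths:

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "config unchanged: restart the existing control plane"
    else
        echo "config changed: control plane needs to be reconfigured"
    fi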
	I1101 10:42:18.994976  368496 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:42:18.998531  368496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:42:19.009380  368496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:19.091222  368496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:42:19.122489  368496 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527 for IP: 192.168.103.2
	I1101 10:42:19.122542  368496 certs.go:195] generating shared ca certs ...
	I1101 10:42:19.122564  368496 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:19.122731  368496 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 10:42:19.122792  368496 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 10:42:19.122807  368496 certs.go:257] generating profile certs ...
	I1101 10:42:19.122926  368496 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/client.key
	I1101 10:42:19.122986  368496 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key.afddc8c1
	I1101 10:42:19.123047  368496 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.key
	I1101 10:42:19.123182  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem (1338 bytes)
	W1101 10:42:19.123233  368496 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522_empty.pem, impossibly tiny 0 bytes
	I1101 10:42:19.123245  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:42:19.123280  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:42:19.123308  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:42:19.123337  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 10:42:19.123388  368496 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:42:19.124208  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:42:19.146314  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:42:19.168951  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:42:19.192551  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:42:19.220147  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:42:19.245723  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:42:19.268283  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:42:19.289183  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/embed-certs-071527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:42:19.311754  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem --> /usr/share/ca-certificates/61522.pem (1338 bytes)
	I1101 10:42:19.333810  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /usr/share/ca-certificates/615222.pem (1708 bytes)
	I1101 10:42:19.356124  368496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:42:19.377800  368496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:42:19.393408  368496 ssh_runner.go:195] Run: openssl version
	I1101 10:42:19.401003  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/61522.pem && ln -fs /usr/share/ca-certificates/61522.pem /etc/ssl/certs/61522.pem"
	I1101 10:42:19.411579  368496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/61522.pem
	I1101 10:42:19.415878  368496 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:01 /usr/share/ca-certificates/61522.pem
	I1101 10:42:19.415933  368496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/61522.pem
	I1101 10:42:19.471208  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/61522.pem /etc/ssl/certs/51391683.0"
	I1101 10:42:19.482043  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/615222.pem && ln -fs /usr/share/ca-certificates/615222.pem /etc/ssl/certs/615222.pem"
	I1101 10:42:19.492517  368496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/615222.pem
	I1101 10:42:19.497198  368496 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:01 /usr/share/ca-certificates/615222.pem
	I1101 10:42:19.497248  368496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/615222.pem
	I1101 10:42:19.553784  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/615222.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:42:19.564362  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:42:19.574902  368496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:19.579592  368496 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:19.579650  368496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:42:19.633944  368496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
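Each `ln -fs ... /etc/ssl/certs/<hash>.0` above names the link after the certificate's subject hash, which is the filename OpenSSL probes when it walks the system trust directory. The hash in the link name comes straight from the `openssl x509 -hash` call that precedes it, e.g. for the minikube CA:

    CERT=/etc/ssl/certs/minikubeCA.pem               # itself a link to /usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"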
	I1101 10:42:19.645552  368496 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:42:19.650875  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:42:19.710929  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:42:19.765523  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:42:19.828247  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:42:19.877548  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:42:19.933659  368496 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
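Each `-checkend 86400` call above asks OpenSSL whether the certificate will still be valid 24 hours from now; exit status 0 means it will not expire inside that window, so a non-zero status is what would force regeneration. The same sweep, compacted, over the certs checked in this run:

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
        sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
            || echo "certificate ${c}.crt expires within 24h"
    done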
	I1101 10:42:19.992714  368496 kubeadm.go:401] StartCluster: {Name:embed-certs-071527 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-071527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:19.992866  368496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:42:19.992928  368496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:42:20.036018  368496 cri.go:89] found id: "e95c5bdefe5bab954d844595226fa1bc71903693fcc281f98c8ca4acd6ebaf44"
	I1101 10:42:20.036180  368496 cri.go:89] found id: "1e1f2165fff912b94ead346d574a39dc51a0e07c82ecfc46cf2218274dc3846b"
	I1101 10:42:20.036188  368496 cri.go:89] found id: "cdeac8cd5ed20ed69f2cae85240af0e1ad8eda39a544a107fdc467d0259e681f"
	I1101 10:42:20.036193  368496 cri.go:89] found id: "2c76e616b169eed9eccc0cbbe049577478d27b125b73db1838da83e15bac755d"
	I1101 10:42:20.036197  368496 cri.go:89] found id: ""
	I1101 10:42:20.036250  368496 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 10:42:20.052319  368496 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:20Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:42:20.052419  368496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:42:20.064481  368496 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:42:20.064516  368496 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:42:20.064563  368496 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:42:20.076775  368496 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:42:20.077819  368496 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-071527" does not appear in /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:20.078753  368496 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-58021/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-071527" cluster setting kubeconfig missing "embed-certs-071527" context setting]
	I1101 10:42:20.079735  368496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:20.081920  368496 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:42:20.093440  368496 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1101 10:42:20.093482  368496 kubeadm.go:602] duration metric: took 28.955359ms to restartPrimaryControlPlane
	I1101 10:42:20.093501  368496 kubeadm.go:403] duration metric: took 100.790269ms to StartCluster
	I1101 10:42:20.093522  368496 settings.go:142] acquiring lock: {Name:mka443f0ac99a59b23190497686b8296dc73358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:20.093670  368496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:20.096021  368496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:20.096378  368496 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:42:20.096664  368496 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:20.096725  368496 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:42:20.096815  368496 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-071527"
	I1101 10:42:20.096843  368496 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-071527"
	W1101 10:42:20.096857  368496 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:42:20.096891  368496 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:42:20.097441  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.097446  368496 addons.go:70] Setting default-storageclass=true in profile "embed-certs-071527"
	I1101 10:42:20.097475  368496 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-071527"
	I1101 10:42:20.097611  368496 addons.go:70] Setting dashboard=true in profile "embed-certs-071527"
	I1101 10:42:20.097644  368496 addons.go:239] Setting addon dashboard=true in "embed-certs-071527"
	W1101 10:42:20.097654  368496 addons.go:248] addon dashboard should already be in state true
	I1101 10:42:20.097688  368496 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:42:20.097873  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.098187  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.098234  368496 out.go:179] * Verifying Kubernetes components...
	I1101 10:42:20.102685  368496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:42:20.124516  368496 addons.go:239] Setting addon default-storageclass=true in "embed-certs-071527"
	W1101 10:42:20.124543  368496 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:42:20.124572  368496 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:42:20.125148  368496 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:42:20.126383  368496 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 10:42:20.126448  368496 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:42:20.127475  368496 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:42:20.127505  368496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:42:20.127560  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:20.129082  368496 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1101 10:42:16.983584  365377 pod_ready.go:104] pod "coredns-66bc5c9577-6zph7" is not "Ready", error: node "no-preload-753486" hosting pod "coredns-66bc5c9577-6zph7" is not "Ready" (will retry)
	I1101 10:42:17.983724  365377 pod_ready.go:94] pod "coredns-66bc5c9577-6zph7" is "Ready"
	I1101 10:42:17.983754  365377 pod_ready.go:86] duration metric: took 9.505816997s for pod "coredns-66bc5c9577-6zph7" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:17.985915  365377 pod_ready.go:83] waiting for pod "etcd-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.491849  365377 pod_ready.go:94] pod "etcd-no-preload-753486" is "Ready"
	I1101 10:42:18.491875  365377 pod_ready.go:86] duration metric: took 505.934613ms for pod "etcd-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.494221  365377 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.498465  365377 pod_ready.go:94] pod "kube-apiserver-no-preload-753486" is "Ready"
	I1101 10:42:18.498489  365377 pod_ready.go:86] duration metric: took 4.246373ms for pod "kube-apiserver-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:18.500663  365377 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:42:20.511850  365377 pod_ready.go:104] pod "kube-controller-manager-no-preload-753486" is not "Ready", error: <nil>
	I1101 10:42:20.130030  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:42:20.130050  368496 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:42:20.130125  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:20.155538  368496 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:42:20.155564  368496 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:42:20.155623  368496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:42:20.163671  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:20.169694  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:20.191939  368496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:42:20.288119  368496 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:42:20.306159  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:42:20.306194  368496 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:42:20.310206  368496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:42:20.315991  368496 node_ready.go:35] waiting up to 6m0s for node "embed-certs-071527" to be "Ready" ...
	I1101 10:42:20.325168  368496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:42:20.333743  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:42:20.333815  368496 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:42:20.355195  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:42:20.355226  368496 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:42:20.378242  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:42:20.378264  368496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:42:20.400055  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:42:20.400089  368496 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 10:42:20.417257  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:42:20.417297  368496 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:42:20.434766  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:42:20.434792  368496 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:42:20.452816  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:42:20.452852  368496 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:42:20.470856  368496 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:42:20.470887  368496 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:42:20.489267  368496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:42:21.827019  368496 node_ready.go:49] node "embed-certs-071527" is "Ready"
	I1101 10:42:21.827060  368496 node_ready.go:38] duration metric: took 1.511035582s for node "embed-certs-071527" to be "Ready" ...
	I1101 10:42:21.827077  368496 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:42:21.827147  368496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:42:22.482041  368496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.17180187s)
	I1101 10:42:22.482106  368496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.156906265s)
	I1101 10:42:22.482192  368496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.992885947s)
	I1101 10:42:22.482250  368496 api_server.go:72] duration metric: took 2.385830473s to wait for apiserver process to appear ...
	I1101 10:42:22.482267  368496 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:42:22.482351  368496 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:42:22.483670  368496 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-071527 addons enable metrics-server
	
	I1101 10:42:22.489684  368496 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:22.489716  368496 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:42:22.495086  368496 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1101 10:42:19.136186  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:21.136738  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:19.162930  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	W1101 10:42:21.661978  359640 pod_ready.go:104] pod "coredns-5dd5756b68-9fdk6" is not "Ready", error: <nil>
	I1101 10:42:22.667611  359640 pod_ready.go:94] pod "coredns-5dd5756b68-9fdk6" is "Ready"
	I1101 10:42:22.667642  359640 pod_ready.go:86] duration metric: took 37.512281759s for pod "coredns-5dd5756b68-9fdk6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.672431  359640 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.679390  359640 pod_ready.go:94] pod "etcd-old-k8s-version-707467" is "Ready"
	I1101 10:42:22.679419  359640 pod_ready.go:86] duration metric: took 6.957128ms for pod "etcd-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.685128  359640 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.690874  359640 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-707467" is "Ready"
	I1101 10:42:22.690900  359640 pod_ready.go:86] duration metric: took 5.745955ms for pod "kube-apiserver-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.695536  359640 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:22.860629  359640 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-707467" is "Ready"
	I1101 10:42:22.860711  359640 pod_ready.go:86] duration metric: took 165.147298ms for pod "kube-controller-manager-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.059741  359640 pod_ready.go:83] waiting for pod "kube-proxy-2pbws" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.459343  359640 pod_ready.go:94] pod "kube-proxy-2pbws" is "Ready"
	I1101 10:42:23.459373  359640 pod_ready.go:86] duration metric: took 399.595768ms for pod "kube-proxy-2pbws" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:42:23.010300  365377 pod_ready.go:104] pod "kube-controller-manager-no-preload-753486" is not "Ready", error: <nil>
	I1101 10:42:23.507130  365377 pod_ready.go:94] pod "kube-controller-manager-no-preload-753486" is "Ready"
	I1101 10:42:23.507157  365377 pod_ready.go:86] duration metric: took 5.00647596s for pod "kube-controller-manager-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.510616  365377 pod_ready.go:83] waiting for pod "kube-proxy-d5hv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.515189  365377 pod_ready.go:94] pod "kube-proxy-d5hv4" is "Ready"
	I1101 10:42:23.515214  365377 pod_ready.go:86] duration metric: took 4.571417ms for pod "kube-proxy-d5hv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.517263  365377 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.781834  365377 pod_ready.go:94] pod "kube-scheduler-no-preload-753486" is "Ready"
	I1101 10:42:23.781860  365377 pod_ready.go:86] duration metric: took 264.579645ms for pod "kube-scheduler-no-preload-753486" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:23.781872  365377 pod_ready.go:40] duration metric: took 15.30754162s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:23.838199  365377 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:42:23.841685  365377 out.go:179] * Done! kubectl is now configured to use "no-preload-753486" cluster and "default" namespace by default
	I1101 10:42:23.660338  359640 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:24.061116  359640 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-707467" is "Ready"
	I1101 10:42:24.061146  359640 pod_ready.go:86] duration metric: took 400.77729ms for pod "kube-scheduler-old-k8s-version-707467" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:24.061163  359640 pod_ready.go:40] duration metric: took 38.910389326s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:24.128259  359640 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1101 10:42:24.167901  359640 out.go:203] 
	W1101 10:42:24.180817  359640 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 10:42:24.182810  359640 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:42:24.187301  359640 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-707467" cluster and "default" namespace by default
	I1101 10:42:22.496547  368496 addons.go:515] duration metric: took 2.399817846s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 10:42:22.982984  368496 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:42:22.989290  368496 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:42:22.989326  368496 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
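	The same verbose health report can be fetched by hand once kubectl is pointed at the cluster; a minimal sketch, assuming minikube wrote a kubeconfig context named after the embed-certs-071527 profile:
	
		kubectl --context embed-certs-071527 get --raw '/healthz?verbose'
		kubectl --context embed-certs-071527 get --raw '/readyz?verbose'
	
	Each unfinished post-start hook (here rbac/bootstrap-roles) is reported with a [-] marker until it completes, after which the endpoint returns 200 "ok", as the next poll below shows.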
	I1101 10:42:23.483006  368496 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 10:42:23.488530  368496 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 10:42:23.489654  368496 api_server.go:141] control plane version: v1.34.1
	I1101 10:42:23.489681  368496 api_server.go:131] duration metric: took 1.007346794s to wait for apiserver health ...
	I1101 10:42:23.489692  368496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:42:23.493313  368496 system_pods.go:59] 8 kube-system pods found
	I1101 10:42:23.493343  368496 system_pods.go:61] "coredns-66bc5c9577-c5td8" [8b884210-c20d-49e8-a595-b5d5e54a2362] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:23.493350  368496 system_pods.go:61] "etcd-embed-certs-071527" [d8a6e438-eddd-43f3-9608-3a008687442f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:42:23.493357  368496 system_pods.go:61] "kindnet-m4vzv" [ca8c842c-8f8c-46c9-844e-fa29b8bec68b] Running
	I1101 10:42:23.493362  368496 system_pods.go:61] "kube-apiserver-embed-certs-071527" [bd3db226-4dbc-4d1f-93ad-55ea39ecb425] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:42:23.493367  368496 system_pods.go:61] "kube-controller-manager-embed-certs-071527" [badbd218-84da-4a8a-b62d-3b8c2a60e20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:42:23.493374  368496 system_pods.go:61] "kube-proxy-l5pzc" [0d6bc572-4a6b-44f1-988f-6aa83896b936] Running
	I1101 10:42:23.493378  368496 system_pods.go:61] "kube-scheduler-embed-certs-071527" [44b21383-497b-452f-b64b-1792f143b547] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:42:23.493390  368496 system_pods.go:61] "storage-provisioner" [ff05c619-0eb3-487b-91e5-6e63996f8329] Running
	I1101 10:42:23.493401  368496 system_pods.go:74] duration metric: took 3.702533ms to wait for pod list to return data ...
	I1101 10:42:23.493411  368496 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:42:23.496249  368496 default_sa.go:45] found service account: "default"
	I1101 10:42:23.496271  368496 default_sa.go:55] duration metric: took 2.852113ms for default service account to be created ...
	I1101 10:42:23.496282  368496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:42:23.499163  368496 system_pods.go:86] 8 kube-system pods found
	I1101 10:42:23.499204  368496 system_pods.go:89] "coredns-66bc5c9577-c5td8" [8b884210-c20d-49e8-a595-b5d5e54a2362] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:23.499215  368496 system_pods.go:89] "etcd-embed-certs-071527" [d8a6e438-eddd-43f3-9608-3a008687442f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:42:23.499233  368496 system_pods.go:89] "kindnet-m4vzv" [ca8c842c-8f8c-46c9-844e-fa29b8bec68b] Running
	I1101 10:42:23.499243  368496 system_pods.go:89] "kube-apiserver-embed-certs-071527" [bd3db226-4dbc-4d1f-93ad-55ea39ecb425] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:42:23.499289  368496 system_pods.go:89] "kube-controller-manager-embed-certs-071527" [badbd218-84da-4a8a-b62d-3b8c2a60e20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:42:23.499304  368496 system_pods.go:89] "kube-proxy-l5pzc" [0d6bc572-4a6b-44f1-988f-6aa83896b936] Running
	I1101 10:42:23.499316  368496 system_pods.go:89] "kube-scheduler-embed-certs-071527" [44b21383-497b-452f-b64b-1792f143b547] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:42:23.499322  368496 system_pods.go:89] "storage-provisioner" [ff05c619-0eb3-487b-91e5-6e63996f8329] Running
	I1101 10:42:23.499332  368496 system_pods.go:126] duration metric: took 3.043029ms to wait for k8s-apps to be running ...
	I1101 10:42:23.499341  368496 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:42:23.499395  368496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:23.518085  368496 system_svc.go:56] duration metric: took 18.734056ms WaitForService to wait for kubelet
	I1101 10:42:23.518112  368496 kubeadm.go:587] duration metric: took 3.421696433s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:42:23.518132  368496 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:42:23.521173  368496 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:42:23.521202  368496 node_conditions.go:123] node cpu capacity is 8
	I1101 10:42:23.521216  368496 node_conditions.go:105] duration metric: took 3.079009ms to run NodePressure ...
	I1101 10:42:23.521237  368496 start.go:242] waiting for startup goroutines ...
	I1101 10:42:23.521252  368496 start.go:247] waiting for cluster config update ...
	I1101 10:42:23.521272  368496 start.go:256] writing updated cluster config ...
	I1101 10:42:23.521614  368496 ssh_runner.go:195] Run: rm -f paused
	I1101 10:42:23.525820  368496 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:23.530097  368496 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c5td8" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:42:25.535303  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:23.138242  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:25.635848  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:27.536545  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:30.038586  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:27.636001  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:29.636138  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:32.135485  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:32.535642  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:35.035519  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:37.036976  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:34.136045  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	W1101 10:42:36.635396  358231 node_ready.go:57] node "default-k8s-diff-port-433711" has "Ready":"False" status (will retry)
	I1101 10:42:37.637483  358231 node_ready.go:49] node "default-k8s-diff-port-433711" is "Ready"
	I1101 10:42:37.637527  358231 node_ready.go:38] duration metric: took 41.005444852s for node "default-k8s-diff-port-433711" to be "Ready" ...
	I1101 10:42:37.637549  358231 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:42:37.637613  358231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:42:37.655480  358231 api_server.go:72] duration metric: took 41.448330118s to wait for apiserver process to appear ...
	I1101 10:42:37.655529  358231 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:42:37.655555  358231 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 10:42:37.661709  358231 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 10:42:37.663038  358231 api_server.go:141] control plane version: v1.34.1
	I1101 10:42:37.663063  358231 api_server.go:131] duration metric: took 7.526728ms to wait for apiserver health ...
	I1101 10:42:37.663071  358231 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:42:37.666824  358231 system_pods.go:59] 8 kube-system pods found
	I1101 10:42:37.666864  358231 system_pods.go:61] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:37.666874  358231 system_pods.go:61] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running
	I1101 10:42:37.666888  358231 system_pods.go:61] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:42:37.666897  358231 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running
	I1101 10:42:37.666906  358231 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running
	I1101 10:42:37.666913  358231 system_pods.go:61] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:42:37.666920  358231 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running
	I1101 10:42:37.666929  358231 system_pods.go:61] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:42:37.666942  358231 system_pods.go:74] duration metric: took 3.863228ms to wait for pod list to return data ...
	I1101 10:42:37.666972  358231 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:42:37.679734  358231 default_sa.go:45] found service account: "default"
	I1101 10:42:37.679754  358231 default_sa.go:55] duration metric: took 12.775997ms for default service account to be created ...
	I1101 10:42:37.679763  358231 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:42:37.683014  358231 system_pods.go:86] 8 kube-system pods found
	I1101 10:42:37.683051  358231 system_pods.go:89] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:37.683061  358231 system_pods.go:89] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running
	I1101 10:42:37.683069  358231 system_pods.go:89] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:42:37.683075  358231 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running
	I1101 10:42:37.683079  358231 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running
	I1101 10:42:37.683082  358231 system_pods.go:89] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:42:37.683088  358231 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running
	I1101 10:42:37.683096  358231 system_pods.go:89] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:42:37.683124  358231 retry.go:31] will retry after 277.137242ms: missing components: kube-dns
	I1101 10:42:37.964611  358231 system_pods.go:86] 8 kube-system pods found
	I1101 10:42:37.964646  358231 system_pods.go:89] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:37.964654  358231 system_pods.go:89] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running
	I1101 10:42:37.964664  358231 system_pods.go:89] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:42:37.964673  358231 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running
	I1101 10:42:37.964685  358231 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running
	I1101 10:42:37.964694  358231 system_pods.go:89] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:42:37.964701  358231 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running
	I1101 10:42:37.964715  358231 system_pods.go:89] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:42:37.964739  358231 retry.go:31] will retry after 367.399137ms: missing components: kube-dns
	I1101 10:42:38.336918  358231 system_pods.go:86] 8 kube-system pods found
	I1101 10:42:38.336949  358231 system_pods.go:89] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:38.336955  358231 system_pods.go:89] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running
	I1101 10:42:38.336961  358231 system_pods.go:89] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:42:38.336964  358231 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running
	I1101 10:42:38.336968  358231 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running
	I1101 10:42:38.336971  358231 system_pods.go:89] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:42:38.336977  358231 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running
	I1101 10:42:38.336985  358231 system_pods.go:89] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:42:38.337001  358231 retry.go:31] will retry after 410.399943ms: missing components: kube-dns
	I1101 10:42:38.751298  358231 system_pods.go:86] 8 kube-system pods found
	I1101 10:42:38.751322  358231 system_pods.go:89] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Running
	I1101 10:42:38.751328  358231 system_pods.go:89] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running
	I1101 10:42:38.751332  358231 system_pods.go:89] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:42:38.751336  358231 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running
	I1101 10:42:38.751340  358231 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running
	I1101 10:42:38.751343  358231 system_pods.go:89] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:42:38.751346  358231 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running
	I1101 10:42:38.751349  358231 system_pods.go:89] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Running
	I1101 10:42:38.751357  358231 system_pods.go:126] duration metric: took 1.071588311s to wait for k8s-apps to be running ...
	I1101 10:42:38.751367  358231 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:42:38.751407  358231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:42:38.764356  358231 system_svc.go:56] duration metric: took 12.981463ms WaitForService to wait for kubelet
	I1101 10:42:38.764385  358231 kubeadm.go:587] duration metric: took 42.557247184s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:42:38.764409  358231 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:42:38.767840  358231 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:42:38.767865  358231 node_conditions.go:123] node cpu capacity is 8
	I1101 10:42:38.767883  358231 node_conditions.go:105] duration metric: took 3.467955ms to run NodePressure ...
	I1101 10:42:38.767900  358231 start.go:242] waiting for startup goroutines ...
	I1101 10:42:38.767915  358231 start.go:247] waiting for cluster config update ...
	I1101 10:42:38.767932  358231 start.go:256] writing updated cluster config ...
	I1101 10:42:38.768198  358231 ssh_runner.go:195] Run: rm -f paused
	I1101 10:42:38.771969  358231 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:38.776092  358231 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v7tvt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:38.780689  358231 pod_ready.go:94] pod "coredns-66bc5c9577-v7tvt" is "Ready"
	I1101 10:42:38.780711  358231 pod_ready.go:86] duration metric: took 4.596914ms for pod "coredns-66bc5c9577-v7tvt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:38.782689  358231 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:38.786821  358231 pod_ready.go:94] pod "etcd-default-k8s-diff-port-433711" is "Ready"
	I1101 10:42:38.786840  358231 pod_ready.go:86] duration metric: took 4.132133ms for pod "etcd-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:38.790835  358231 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:38.794920  358231 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-433711" is "Ready"
	I1101 10:42:38.794937  358231 pod_ready.go:86] duration metric: took 4.075876ms for pod "kube-apiserver-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:38.796772  358231 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:39.176658  358231 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-433711" is "Ready"
	I1101 10:42:39.176688  358231 pod_ready.go:86] duration metric: took 379.896408ms for pod "kube-controller-manager-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:39.377394  358231 pod_ready.go:83] waiting for pod "kube-proxy-2g94q" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:39.776461  358231 pod_ready.go:94] pod "kube-proxy-2g94q" is "Ready"
	I1101 10:42:39.776488  358231 pod_ready.go:86] duration metric: took 399.063584ms for pod "kube-proxy-2g94q" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:39.977040  358231 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:40.376954  358231 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-433711" is "Ready"
	I1101 10:42:40.376979  358231 pod_ready.go:86] duration metric: took 399.914108ms for pod "kube-scheduler-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:42:40.376991  358231 pod_ready.go:40] duration metric: took 1.604997936s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:42:40.428204  358231 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:42:40.429535  358231 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-433711" cluster and "default" namespace by default
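	The node and pod readiness polling recorded above (node_ready.go / pod_ready.go) can be approximated with plain kubectl; a rough sketch, assuming the kubeconfig contexts are named after the minikube profiles:
	
		kubectl --context default-k8s-diff-port-433711 wait --for=condition=Ready node/default-k8s-diff-port-433711 --timeout=4m0s
		kubectl --context default-k8s-diff-port-433711 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s
	
	kubectl wait blocks until the condition is met or the timeout expires, which mirrors the 4m0s "extra waiting" budget used above.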
	
	
	==> CRI-O <==
	Nov 01 10:42:03 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:03.438864688Z" level=info msg="Started container" PID=1714 containerID=95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff/dashboard-metrics-scraper id=d64eaf75-c55a-439c-9ba2-9b9e01302b4c name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0d9ffdcfd0ab93f14a2e566caf3a68ef6380bd2f233e1126ee10c36265375f9
	Nov 01 10:42:04 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:04.392162326Z" level=info msg="Removing container: c106a72ce8260f7f8651b575237df15eb413cfc78ac90d3da8066f48ffa95086" id=a601c85f-f8d8-48a7-8189-c9de487c2fa1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:42:04 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:04.404576343Z" level=info msg="Removed container c106a72ce8260f7f8651b575237df15eb413cfc78ac90d3da8066f48ffa95086: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff/dashboard-metrics-scraper" id=a601c85f-f8d8-48a7-8189-c9de487c2fa1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.421952995Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0039f4c4-cc6c-448b-8103-4e37252f351a name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.422810375Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=24fe31e9-69b6-4c0b-8aa9-da759192f305 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.423764623Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=85d6ec1e-e2cb-49bf-bc79-2af5a96517e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.423870489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.428004739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.428206727Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e91c0d18a43204f0c9b817ae891a926c7e353f7c0d53c56a232d7eae9b0b570c/merged/etc/passwd: no such file or directory"
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.428249623Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e91c0d18a43204f0c9b817ae891a926c7e353f7c0d53c56a232d7eae9b0b570c/merged/etc/group: no such file or directory"
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.428565458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.451085093Z" level=info msg="Created container 89bd390fe4bf137c8ab1c83b22c30abeb0ced55dee6477a4b15b0b2ec9274894: kube-system/storage-provisioner/storage-provisioner" id=85d6ec1e-e2cb-49bf-bc79-2af5a96517e4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.451739393Z" level=info msg="Starting container: 89bd390fe4bf137c8ab1c83b22c30abeb0ced55dee6477a4b15b0b2ec9274894" id=fac47bc5-83db-4153-a433-f5c9e32b3cbb name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:15 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:15.453432384Z" level=info msg="Started container" PID=1729 containerID=89bd390fe4bf137c8ab1c83b22c30abeb0ced55dee6477a4b15b0b2ec9274894 description=kube-system/storage-provisioner/storage-provisioner id=fac47bc5-83db-4153-a433-f5c9e32b3cbb name=/runtime.v1.RuntimeService/StartContainer sandboxID=cefbb88db3a66ba001450e1a1a0ddc2023808efbd75238fa04e9b657079bf155
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.289457196Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ad8a7eb1-91d5-433c-8d77-d6e9003c2dd5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.290403655Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=228589a6-4f2e-4ed7-91b7-ff5b8fc6ffaf name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.291458376Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff/dashboard-metrics-scraper" id=2b9af76f-f7ea-4f97-ac62-3ac98259f60c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.291772322Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.299521751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.300152506Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.337892874Z" level=info msg="Created container 89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff/dashboard-metrics-scraper" id=2b9af76f-f7ea-4f97-ac62-3ac98259f60c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.338718554Z" level=info msg="Starting container: 89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc" id=c0d96ce6-200f-4cdb-96e2-9143ab01dfe9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.340816836Z" level=info msg="Started container" PID=1745 containerID=89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff/dashboard-metrics-scraper id=c0d96ce6-200f-4cdb-96e2-9143ab01dfe9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0d9ffdcfd0ab93f14a2e566caf3a68ef6380bd2f233e1126ee10c36265375f9
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.433886572Z" level=info msg="Removing container: 95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65" id=df4461f9-5be3-4d8f-89e3-5aa8ba8fc60c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:42:18 old-k8s-version-707467 crio[565]: time="2025-11-01T10:42:18.444461181Z" level=info msg="Removed container 95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff/dashboard-metrics-scraper" id=df4461f9-5be3-4d8f-89e3-5aa8ba8fc60c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	89e763df94703       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   f0d9ffdcfd0ab       dashboard-metrics-scraper-5f989dc9cf-6vrff       kubernetes-dashboard
	89bd390fe4bf1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   cefbb88db3a66       storage-provisioner                              kube-system
	3ab51e7e8cc6b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago       Running             kubernetes-dashboard        0                   575e6dc1834ce       kubernetes-dashboard-8694d4445c-d6xpb            kubernetes-dashboard
	54e8ae0b6db53       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           57 seconds ago       Running             coredns                     0                   99220ee7b216b       coredns-5dd5756b68-9fdk6                         kube-system
	68e5cfc42e350       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   59b2bea21cba8       busybox                                          default
	913b97b4016cb       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           57 seconds ago       Running             kube-proxy                  0                   e2373e177e7ca       kube-proxy-2pbws                                 kube-system
	115751d0762ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   cefbb88db3a66       storage-provisioner                              kube-system
	69766ce1ba06c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   3c13e12b48275       kindnet-xxlgz                                    kube-system
	db082c42e2322       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   4b4340a970fa6       kube-scheduler-old-k8s-version-707467            kube-system
	c351b883f4c74       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   eed449378dac6       etcd-old-k8s-version-707467                      kube-system
	0e2eee6826524       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   1d827a1c2e17a       kube-controller-manager-old-k8s-version-707467   kube-system
	27186a49df0ce       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   843fe98b963fb       kube-apiserver-old-k8s-version-707467            kube-system
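	dashboard-metrics-scraper is the only container in state Exited (attempt 2), matching the create/remove cycle in the CRI-O log above; a hedged sketch for inspecting that restart loop, reusing the pod name from the table:
	
		kubectl --context old-k8s-version-707467 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-6vrff
		kubectl --context old-k8s-version-707467 -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-6vrff --previous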
	
	
	==> coredns [54e8ae0b6db53e2ff5bf08aa06547a75997a2eca66fbbef9a892fbd7dc99d491] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54721 - 12065 "HINFO IN 7409137930354339314.867580938443163605. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.035349298s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
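	The trailing "Still waiting on: kubernetes" and i/o timeout lines mean the ready plugin could not reach the API service at 10.96.0.1:443 yet; a short sketch for checking the same pods after the fact, assuming the old-k8s-version-707467 context:
	
		kubectl --context old-k8s-version-707467 -n kube-system get pods -l k8s-app=kube-dns -o wide
		kubectl --context old-k8s-version-707467 -n kube-system logs -l k8s-app=kube-dns --tail=20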
	
	
	==> describe nodes <==
	Name:               old-k8s-version-707467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-707467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=old-k8s-version-707467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_40_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:40:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-707467
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:42:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:42:14 +0000   Sat, 01 Nov 2025 10:40:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:42:14 +0000   Sat, 01 Nov 2025 10:40:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:42:14 +0000   Sat, 01 Nov 2025 10:40:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:42:14 +0000   Sat, 01 Nov 2025 10:41:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-707467
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                c2fc43a3-538e-4e6c-a223-e8844e524c0a
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-5dd5756b68-9fdk6                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     116s
	  kube-system                 etcd-old-k8s-version-707467                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m10s
	  kube-system                 kindnet-xxlgz                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-old-k8s-version-707467             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-old-k8s-version-707467    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-2pbws                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-old-k8s-version-707467             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-6vrff        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-d6xpb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 115s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s               kubelet          Node old-k8s-version-707467 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s               kubelet          Node old-k8s-version-707467 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s               kubelet          Node old-k8s-version-707467 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m9s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           117s               node-controller  Node old-k8s-version-707467 event: Registered Node old-k8s-version-707467 in Controller
	  Normal  NodeReady                102s               kubelet          Node old-k8s-version-707467 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node old-k8s-version-707467 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node old-k8s-version-707467 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node old-k8s-version-707467 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node old-k8s-version-707467 event: Registered Node old-k8s-version-707467 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[Nov 1 10:38] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	
	
	==> etcd [c351b883f4c7425bf4220670aefd0ab86d65f31b59b246d15d5a0099457dce03] <==
	{"level":"info","ts":"2025-11-01T10:41:40.883052Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:41:40.883065Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:41:40.883333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-11-01T10:41:40.883403Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-11-01T10:41:40.883539Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:41:40.883583Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T10:41:40.885786Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T10:41:40.885868Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-01T10:41:40.885902Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-01T10:41:40.886133Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T10:41:40.88617Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:41:42.571375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T10:41:42.571438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:41:42.57146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-01T10:41:42.571478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T10:41:42.571486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-01T10:41:42.571529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T10:41:42.571543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-01T10:41:42.573062Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-707467 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:41:42.573065Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:41:42.573087Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:41:42.573354Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:41:42.573385Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T10:41:42.575299Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T10:41:42.57537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 10:42:42 up  2:25,  0 user,  load average: 4.49, 3.85, 2.51
	Linux old-k8s-version-707467 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [69766ce1ba06cdc04392db038af2182e2d12f992966b11e4498d358ade540d98] <==
	I1101 10:41:44.799820       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:41:44.800075       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 10:41:44.800258       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:41:44.800355       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:41:44.800400       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:41:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:41:45.094349       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:41:45.094387       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:41:45.094397       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:41:45.192820       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:41:45.502443       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:41:45.502472       1 metrics.go:72] Registering metrics
	I1101 10:41:45.502573       1 controller.go:711] "Syncing nftables rules"
	I1101 10:41:55.096622       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:41:55.096660       1 main.go:301] handling current node
	I1101 10:42:05.094295       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:42:05.094344       1 main.go:301] handling current node
	I1101 10:42:15.095115       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:42:15.095185       1 main.go:301] handling current node
	I1101 10:42:25.095997       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:42:25.096037       1 main.go:301] handling current node
	I1101 10:42:35.098574       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 10:42:35.098611       1 main.go:301] handling current node
	
	
	==> kube-apiserver [27186a49df0ceda967ebf7847c9ede3092c812946cd2c021b530c97b5dd0302f] <==
	I1101 10:41:43.616123       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:41:43.662249       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 10:41:43.662703       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 10:41:43.662786       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 10:41:43.662822       1 aggregator.go:166] initial CRD sync complete...
	I1101 10:41:43.662836       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 10:41:43.662843       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:41:43.662849       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:41:43.664031       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 10:41:43.664916       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:41:43.664969       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 10:41:43.664984       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1101 10:41:43.675567       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:41:43.679891       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 10:41:44.480291       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 10:41:44.511172       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 10:41:44.528103       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:41:44.535266       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:41:44.542863       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 10:41:44.567730       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:41:44.584320       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.94.226"}
	I1101 10:41:44.605741       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.231.116"}
	I1101 10:41:56.309189       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 10:41:56.457679       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:41:56.526207       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0e2eee682652453663ca05634fbc994a3a996b9febb53a7bbd8e5ba7558b3a22] <==
	I1101 10:41:56.312940       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1101 10:41:56.315398       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1101 10:41:56.325850       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-d6xpb"
	I1101 10:41:56.329389       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-6vrff"
	I1101 10:41:56.342322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.969831ms"
	I1101 10:41:56.350313       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="37.637808ms"
	I1101 10:41:56.356115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.731178ms"
	I1101 10:41:56.356209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.603µs"
	I1101 10:41:56.363706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="12.820047ms"
	I1101 10:41:56.363850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.756µs"
	I1101 10:41:56.367928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="109.37µs"
	I1101 10:41:56.379330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.4µs"
	I1101 10:41:56.525885       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:41:56.536954       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1101 10:41:56.589293       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 10:41:56.589436       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 10:42:01.495489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.135158ms"
	I1101 10:42:01.496002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="102.334µs"
	I1101 10:42:03.397221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.483µs"
	I1101 10:42:04.406136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.237µs"
	I1101 10:42:05.408984       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="82.293µs"
	I1101 10:42:18.444759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.191µs"
	I1101 10:42:22.648289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.831634ms"
	I1101 10:42:22.648585       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.855µs"
	I1101 10:42:26.653324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.12µs"
	
	
	==> kube-proxy [913b97b4016cb2e7253976bd632a5f8f0aa1b6488a6b0bb1cba4538206af541b] <==
	I1101 10:41:44.726534       1 server_others.go:69] "Using iptables proxy"
	I1101 10:41:44.737735       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1101 10:41:44.756404       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:41:44.758920       1 server_others.go:152] "Using iptables Proxier"
	I1101 10:41:44.758955       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 10:41:44.758961       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 10:41:44.758992       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 10:41:44.759271       1 server.go:846] "Version info" version="v1.28.0"
	I1101 10:41:44.759328       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:41:44.760040       1 config.go:188] "Starting service config controller"
	I1101 10:41:44.760456       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 10:41:44.760338       1 config.go:97] "Starting endpoint slice config controller"
	I1101 10:41:44.760527       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 10:41:44.760368       1 config.go:315] "Starting node config controller"
	I1101 10:41:44.760610       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 10:41:44.861628       1 shared_informer.go:318] Caches are synced for node config
	I1101 10:41:44.861825       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 10:41:44.861911       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [db082c42e2322ac77e4c7ac5029613f4fc315ba2c60b168fd3ad9b50ea598e6a] <==
	I1101 10:41:41.494556       1 serving.go:348] Generated self-signed cert in-memory
	I1101 10:41:43.650362       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 10:41:43.651447       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:41:43.655454       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1101 10:41:43.655542       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:41:43.655562       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 10:41:43.655547       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1101 10:41:43.655522       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:41:43.655687       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 10:41:43.656629       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 10:41:43.656714       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 10:41:43.756141       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 10:41:43.756154       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1101 10:41:43.756218       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:41:56 old-k8s-version-707467 kubelet[720]: I1101 10:41:56.340832     720 topology_manager.go:215] "Topology Admit Handler" podUID="a21016e2-b599-41cb-ba24-a99f52c2ff2b" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-6vrff"
	Nov 01 10:41:56 old-k8s-version-707467 kubelet[720]: I1101 10:41:56.395046     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssptf\" (UniqueName: \"kubernetes.io/projected/a21016e2-b599-41cb-ba24-a99f52c2ff2b-kube-api-access-ssptf\") pod \"dashboard-metrics-scraper-5f989dc9cf-6vrff\" (UID: \"a21016e2-b599-41cb-ba24-a99f52c2ff2b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff"
	Nov 01 10:41:56 old-k8s-version-707467 kubelet[720]: I1101 10:41:56.395297     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhknq\" (UniqueName: \"kubernetes.io/projected/a3296379-d073-4ef5-882d-36bc6b0d6961-kube-api-access-dhknq\") pod \"kubernetes-dashboard-8694d4445c-d6xpb\" (UID: \"a3296379-d073-4ef5-882d-36bc6b0d6961\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-d6xpb"
	Nov 01 10:41:56 old-k8s-version-707467 kubelet[720]: I1101 10:41:56.395361     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a21016e2-b599-41cb-ba24-a99f52c2ff2b-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-6vrff\" (UID: \"a21016e2-b599-41cb-ba24-a99f52c2ff2b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff"
	Nov 01 10:41:56 old-k8s-version-707467 kubelet[720]: I1101 10:41:56.395403     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a3296379-d073-4ef5-882d-36bc6b0d6961-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-d6xpb\" (UID: \"a3296379-d073-4ef5-882d-36bc6b0d6961\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-d6xpb"
	Nov 01 10:42:03 old-k8s-version-707467 kubelet[720]: I1101 10:42:03.386106     720 scope.go:117] "RemoveContainer" containerID="c106a72ce8260f7f8651b575237df15eb413cfc78ac90d3da8066f48ffa95086"
	Nov 01 10:42:03 old-k8s-version-707467 kubelet[720]: I1101 10:42:03.397065     720 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-d6xpb" podStartSLOduration=3.58007456 podCreationTimestamp="2025-11-01 10:41:56 +0000 UTC" firstStartedPulling="2025-11-01 10:41:56.686914809 +0000 UTC m=+16.495880372" lastFinishedPulling="2025-11-01 10:42:00.503833189 +0000 UTC m=+20.312798750" observedRunningTime="2025-11-01 10:42:01.421291142 +0000 UTC m=+21.230256739" watchObservedRunningTime="2025-11-01 10:42:03.396992938 +0000 UTC m=+23.205958563"
	Nov 01 10:42:04 old-k8s-version-707467 kubelet[720]: I1101 10:42:04.390772     720 scope.go:117] "RemoveContainer" containerID="c106a72ce8260f7f8651b575237df15eb413cfc78ac90d3da8066f48ffa95086"
	Nov 01 10:42:04 old-k8s-version-707467 kubelet[720]: I1101 10:42:04.390923     720 scope.go:117] "RemoveContainer" containerID="95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65"
	Nov 01 10:42:04 old-k8s-version-707467 kubelet[720]: E1101 10:42:04.391294     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6vrff_kubernetes-dashboard(a21016e2-b599-41cb-ba24-a99f52c2ff2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff" podUID="a21016e2-b599-41cb-ba24-a99f52c2ff2b"
	Nov 01 10:42:05 old-k8s-version-707467 kubelet[720]: I1101 10:42:05.395469     720 scope.go:117] "RemoveContainer" containerID="95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65"
	Nov 01 10:42:05 old-k8s-version-707467 kubelet[720]: E1101 10:42:05.396985     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6vrff_kubernetes-dashboard(a21016e2-b599-41cb-ba24-a99f52c2ff2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff" podUID="a21016e2-b599-41cb-ba24-a99f52c2ff2b"
	Nov 01 10:42:06 old-k8s-version-707467 kubelet[720]: I1101 10:42:06.643797     720 scope.go:117] "RemoveContainer" containerID="95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65"
	Nov 01 10:42:06 old-k8s-version-707467 kubelet[720]: E1101 10:42:06.644127     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6vrff_kubernetes-dashboard(a21016e2-b599-41cb-ba24-a99f52c2ff2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff" podUID="a21016e2-b599-41cb-ba24-a99f52c2ff2b"
	Nov 01 10:42:15 old-k8s-version-707467 kubelet[720]: I1101 10:42:15.421542     720 scope.go:117] "RemoveContainer" containerID="115751d0762eefbe98efde6fb18bf1cb486efe2af3bd390d82cd343eaccc0b56"
	Nov 01 10:42:18 old-k8s-version-707467 kubelet[720]: I1101 10:42:18.288816     720 scope.go:117] "RemoveContainer" containerID="95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65"
	Nov 01 10:42:18 old-k8s-version-707467 kubelet[720]: I1101 10:42:18.432676     720 scope.go:117] "RemoveContainer" containerID="95cf717f839c194aad0c23abd12d9c690ddb60d3a6d5a480ea3be1c7a9910d65"
	Nov 01 10:42:18 old-k8s-version-707467 kubelet[720]: I1101 10:42:18.432948     720 scope.go:117] "RemoveContainer" containerID="89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc"
	Nov 01 10:42:18 old-k8s-version-707467 kubelet[720]: E1101 10:42:18.433319     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6vrff_kubernetes-dashboard(a21016e2-b599-41cb-ba24-a99f52c2ff2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff" podUID="a21016e2-b599-41cb-ba24-a99f52c2ff2b"
	Nov 01 10:42:26 old-k8s-version-707467 kubelet[720]: I1101 10:42:26.643387     720 scope.go:117] "RemoveContainer" containerID="89e763df947033c730b11b3e7b26148d6c1f4f185f2534fc14d67e4807c3edfc"
	Nov 01 10:42:26 old-k8s-version-707467 kubelet[720]: E1101 10:42:26.643803     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-6vrff_kubernetes-dashboard(a21016e2-b599-41cb-ba24-a99f52c2ff2b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-6vrff" podUID="a21016e2-b599-41cb-ba24-a99f52c2ff2b"
	Nov 01 10:42:37 old-k8s-version-707467 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:42:37 old-k8s-version-707467 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:42:37 old-k8s-version-707467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:42:37 old-k8s-version-707467 systemd[1]: kubelet.service: Consumed 1.618s CPU time.
	
	
	==> kubernetes-dashboard [3ab51e7e8cc6bcb05ed3ab119166fd47bcd81f27d5f66ee5192503bfea0b2f11] <==
	2025/11/01 10:42:00 Starting overwatch
	2025/11/01 10:42:00 Using namespace: kubernetes-dashboard
	2025/11/01 10:42:00 Using in-cluster config to connect to apiserver
	2025/11/01 10:42:00 Using secret token for csrf signing
	2025/11/01 10:42:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:42:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:42:00 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 10:42:00 Generating JWE encryption key
	2025/11/01 10:42:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:42:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:42:00 Initializing JWE encryption key from synchronized object
	2025/11/01 10:42:00 Creating in-cluster Sidecar client
	2025/11/01 10:42:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:42:00 Serving insecurely on HTTP port: 9090
	2025/11/01 10:42:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [115751d0762eefbe98efde6fb18bf1cb486efe2af3bd390d82cd343eaccc0b56] <==
	I1101 10:41:44.684928       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:42:14.689129       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [89bd390fe4bf137c8ab1c83b22c30abeb0ced55dee6477a4b15b0b2ec9274894] <==
	I1101 10:42:15.464670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:42:15.472618       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:42:15.472661       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 10:42:32.869185       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:42:32.869342       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-707467_664555b9-0c5f-42fd-9371-aa4049299cfc!
	I1101 10:42:32.869311       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"97b1eecb-ad8d-49bf-af88-e6407fe47b1a", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-707467_664555b9-0c5f-42fd-9371-aa4049299cfc became leader
	I1101 10:42:32.969613       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-707467_664555b9-0c5f-42fd-9371-aa4049299cfc!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-707467 -n old-k8s-version-707467
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-707467 -n old-k8s-version-707467: exit status 2 (368.351628ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-707467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-433711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-433711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (281.945766ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:42:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
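
The stderr above shows the enable step failing its paused-state check: `sudo runc list -f json` exits 1 inside the node because /run/runc does not exist. A minimal Go sketch of that same probe, assuming the kic node is the Docker container named after the profile (default-k8s-diff-port-433711) and that `docker exec` is available on the host; the helper name is hypothetical, not minikube code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listRuncContainers is a hypothetical helper mirroring the failing
	// "check paused: list paused" step above: it runs `sudo runc list -f json`
	// inside the node container via `docker exec`.
	func listRuncContainers(node string) (string, error) {
		out, err := exec.Command("docker", "exec", node, "sudo", "runc", "list", "-f", "json").CombinedOutput()
		return string(out), err
	}

	func main() {
		node := "default-k8s-diff-port-433711" // assumed container name, taken from the profile above

		out, err := listRuncContainers(node)
		fmt.Printf("runc list: err=%v output=%q\n", err, out)

		// The stderr above reports "open /run/runc: no such file or directory",
		// so also check whether the runc state directory exists in the node.
		st, stErr := exec.Command("docker", "exec", node, "ls", "-ld", "/run/runc").CombinedOutput()
		fmt.Printf("/run/runc: err=%v output=%q\n", stErr, st)
	}

Run on the host while the profile is up; reproducing the same exit status 1 and missing-directory message outside the test would confirm the failure sits at the runtime layer (note the Tmpfs mount of /run in the docker inspect output below) rather than in the metrics-server addon itself.
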
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-433711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-433711 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-433711 describe deploy/metrics-server -n kube-system: exit status 1 (72.917553ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-433711 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-433711
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-433711:

-- stdout --
	[
	    {
	        "Id": "b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2",
	        "Created": "2025-11-01T10:41:33.057243261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 359281,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:41:33.095826901Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2/hosts",
	        "LogPath": "/var/lib/docker/containers/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2-json.log",
	        "Name": "/default-k8s-diff-port-433711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-433711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-433711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2",
	                "LowerDir": "/var/lib/docker/overlay2/16690dfe0c3846cfaa0757431febd471d4e0256ccbe75ee197cfc5d26c1c1409-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16690dfe0c3846cfaa0757431febd471d4e0256ccbe75ee197cfc5d26c1c1409/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16690dfe0c3846cfaa0757431febd471d4e0256ccbe75ee197cfc5d26c1c1409/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16690dfe0c3846cfaa0757431febd471d4e0256ccbe75ee197cfc5d26c1c1409/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-433711",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-433711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-433711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-433711",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-433711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5cff331a6936c38c4c63e4fe74888bfd5d90ee1bdb2d7c451d7be0a99cfa6e41",
	            "SandboxKey": "/var/run/docker/netns/5cff331a6936",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-433711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:f6:99:4f:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0395ef9fed2dfe179301e2f7acf97030a23523642ea4cc41f18d2b39a90a95e0",
	                    "EndpointID": "ff586af51018f9d390ace2e3f9d455fc78974b8e160f78920979112ffe9d071c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-433711",
	                        "b9f86e35d4b2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-433711 logs -n 25
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-299863 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ ssh     │ -p custom-flannel-299863 sudo crio config                                                                                                                                                                                                     │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p custom-flannel-299863                                                                                                                                                                                                                      │ custom-flannel-299863        │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ delete  │ -p disable-driver-mounts-339061                                                                                                                                                                                                               │ disable-driver-mounts-339061 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-707467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p no-preload-753486 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-071527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p embed-certs-071527 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p no-preload-753486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p no-preload-753486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-071527 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ no-preload-753486 image list --format=json                                                                                                                                                                                                    │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p no-preload-753486 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ old-k8s-version-707467 image list --format=json                                                                                                                                                                                               │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p old-k8s-version-707467 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-433711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:42:44
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:42:44.539930  375513 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:42:44.540208  375513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:44.540219  375513 out.go:374] Setting ErrFile to fd 2...
	I1101 10:42:44.540235  375513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:42:44.540473  375513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:42:44.540966  375513 out.go:368] Setting JSON to false
	I1101 10:42:44.542194  375513 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8704,"bootTime":1761985060,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:42:44.542291  375513 start.go:143] virtualization: kvm guest
	I1101 10:42:44.544360  375513 out.go:179] * [newest-cni-336923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:42:44.545977  375513 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:42:44.546006  375513 notify.go:221] Checking for updates...
	I1101 10:42:44.548247  375513 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:42:44.549380  375513 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:42:44.550528  375513 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:42:44.551588  375513 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:42:44.552606  375513 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:42:44.554157  375513 config.go:182] Loaded profile config "default-k8s-diff-port-433711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:44.554319  375513 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:42:44.554470  375513 config.go:182] Loaded profile config "old-k8s-version-707467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:42:44.554591  375513 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:42:44.579038  375513 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:42:44.579187  375513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:44.637937  375513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:42:44.628352369 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:42:44.638043  375513 docker.go:319] overlay module found
	I1101 10:42:44.639691  375513 out.go:179] * Using the docker driver based on user configuration
	I1101 10:42:44.640670  375513 start.go:309] selected driver: docker
	I1101 10:42:44.640684  375513 start.go:930] validating driver "docker" against <nil>
	I1101 10:42:44.640695  375513 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:42:44.641221  375513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:42:44.697378  375513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:42:44.686747589 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:42:44.697586  375513 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 10:42:44.697620  375513 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 10:42:44.697892  375513 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:42:44.699916  375513 out.go:179] * Using Docker driver with root privileges
	I1101 10:42:44.700875  375513 cni.go:84] Creating CNI manager for ""
	I1101 10:42:44.700957  375513 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:42:44.700969  375513 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 10:42:44.701039  375513 start.go:353] cluster config:
	{Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:42:44.702024  375513 out.go:179] * Starting "newest-cni-336923" primary control-plane node in "newest-cni-336923" cluster
	I1101 10:42:44.702891  375513 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:42:44.703924  375513 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:42:44.704901  375513 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:44.704963  375513 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:42:44.704973  375513 cache.go:59] Caching tarball of preloaded images
	I1101 10:42:44.705001  375513 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:42:44.705084  375513 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:42:44.705104  375513 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:42:44.705221  375513 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/config.json ...
	I1101 10:42:44.705247  375513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/config.json: {Name:mk51267a09313261510c4ec85af8c7cacfa1ab0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:42:44.725258  375513 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:42:44.725286  375513 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:42:44.725303  375513 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:42:44.725339  375513 start.go:360] acquireMachinesLock for newest-cni-336923: {Name:mk078b1ded54eaee8a26288c21e4405f07864b1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:42:44.725435  375513 start.go:364] duration metric: took 76.251µs to acquireMachinesLock for "newest-cni-336923"
	I1101 10:42:44.725458  375513 start.go:93] Provisioning new machine with config: &{Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:42:44.725556  375513 start.go:125] createHost starting for "" (driver="docker")
	W1101 10:42:44.035427  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	W1101 10:42:46.036256  368496 pod_ready.go:104] pod "coredns-66bc5c9577-c5td8" is not "Ready", error: <nil>
	I1101 10:42:44.727893  375513 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 10:42:44.728200  375513 start.go:159] libmachine.API.Create for "newest-cni-336923" (driver="docker")
	I1101 10:42:44.728241  375513 client.go:173] LocalClient.Create starting
	I1101 10:42:44.728336  375513 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem
	I1101 10:42:44.728380  375513 main.go:143] libmachine: Decoding PEM data...
	I1101 10:42:44.728400  375513 main.go:143] libmachine: Parsing certificate...
	I1101 10:42:44.728477  375513 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem
	I1101 10:42:44.728539  375513 main.go:143] libmachine: Decoding PEM data...
	I1101 10:42:44.728572  375513 main.go:143] libmachine: Parsing certificate...
	I1101 10:42:44.729004  375513 cli_runner.go:164] Run: docker network inspect newest-cni-336923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 10:42:44.746183  375513 cli_runner.go:211] docker network inspect newest-cni-336923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 10:42:44.746250  375513 network_create.go:284] running [docker network inspect newest-cni-336923] to gather additional debugging logs...
	I1101 10:42:44.746265  375513 cli_runner.go:164] Run: docker network inspect newest-cni-336923
	W1101 10:42:44.762770  375513 cli_runner.go:211] docker network inspect newest-cni-336923 returned with exit code 1
	I1101 10:42:44.762804  375513 network_create.go:287] error running [docker network inspect newest-cni-336923]: docker network inspect newest-cni-336923: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-336923 not found
	I1101 10:42:44.762817  375513 network_create.go:289] output of [docker network inspect newest-cni-336923]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-336923 not found
	
	** /stderr **
	I1101 10:42:44.762931  375513 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:42:44.780770  375513 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ac7093b735a5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:19:58:44:be:58} reservation:<nil>}
	I1101 10:42:44.781296  375513 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2c03ebffc507 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:1e:41:85:56:13:7f} reservation:<nil>}
	I1101 10:42:44.782064  375513 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-abee7b1ad47f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:16:f3:16:31:10:75} reservation:<nil>}
	I1101 10:42:44.782701  375513 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0395ef9fed2d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:34:9e:39:13:a7} reservation:<nil>}
	I1101 10:42:44.783565  375513 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e3b9f0}
	I1101 10:42:44.783588  375513 network_create.go:124] attempt to create docker network newest-cni-336923 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1101 10:42:44.783637  375513 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-336923 newest-cni-336923
	I1101 10:42:44.846527  375513 network_create.go:108] docker network newest-cni-336923 192.168.85.0/24 created
	I1101 10:42:44.846565  375513 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-336923" container
	I1101 10:42:44.846631  375513 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 10:42:44.863779  375513 cli_runner.go:164] Run: docker volume create newest-cni-336923 --label name.minikube.sigs.k8s.io=newest-cni-336923 --label created_by.minikube.sigs.k8s.io=true
	I1101 10:42:44.882841  375513 oci.go:103] Successfully created a docker volume newest-cni-336923
	I1101 10:42:44.882921  375513 cli_runner.go:164] Run: docker run --rm --name newest-cni-336923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-336923 --entrypoint /usr/bin/test -v newest-cni-336923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 10:42:45.903551  375513 cli_runner.go:217] Completed: docker run --rm --name newest-cni-336923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-336923 --entrypoint /usr/bin/test -v newest-cni-336923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.02054617s)
	I1101 10:42:45.903582  375513 oci.go:107] Successfully prepared a docker volume newest-cni-336923
	I1101 10:42:45.903628  375513 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:42:45.903657  375513 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 10:42:45.903734  375513 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-336923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Nov 01 10:42:37 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:37.984380264Z" level=info msg="Starting container: 6922e6365229f04324df089bfa03f8fa6b37f6be9e48a63bb9a0cdcd213493e3" id=0bf9a145-84e6-418d-9538-eaaad1c3b12f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:37 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:37.98632508Z" level=info msg="Started container" PID=1836 containerID=6922e6365229f04324df089bfa03f8fa6b37f6be9e48a63bb9a0cdcd213493e3 description=kube-system/coredns-66bc5c9577-v7tvt/coredns id=0bf9a145-84e6-418d-9538-eaaad1c3b12f name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c08d9d28aca9080f449b2817c7d372e512e8dfbb3900d4029b1f0c13feaac9e
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.92171334Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1fa3402e-f3c1-4159-bfdd-88c14738113f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.921828664Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.932977798Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:803022ab689eefdd53ea5a018f2e7afb579305284c5b4d7a2fd064aaf2aa04f6 UID:59420294-cb51-4139-83a6-0ab57cb66dde NetNS:/var/run/netns/707ed4fb-3119-4127-8415-775bf8fec0f8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00098a498}] Aliases:map[]}"
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.933017758Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.94566865Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:803022ab689eefdd53ea5a018f2e7afb579305284c5b4d7a2fd064aaf2aa04f6 UID:59420294-cb51-4139-83a6-0ab57cb66dde NetNS:/var/run/netns/707ed4fb-3119-4127-8415-775bf8fec0f8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00098a498}] Aliases:map[]}"
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.945826711Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.946670056Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.947531423Z" level=info msg="Ran pod sandbox 803022ab689eefdd53ea5a018f2e7afb579305284c5b4d7a2fd064aaf2aa04f6 with infra container: default/busybox/POD" id=1fa3402e-f3c1-4159-bfdd-88c14738113f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.948917387Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=af062253-5a2b-42a7-b252-d25731337042 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.94923482Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=af062253-5a2b-42a7-b252-d25731337042 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.949312512Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=af062253-5a2b-42a7-b252-d25731337042 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.950304163Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c0ec908a-e659-4170-bf6b-45d809046e59 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:42:40 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:40.954543484Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 10:42:43 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:43.103869737Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c0ec908a-e659-4170-bf6b-45d809046e59 name=/runtime.v1.ImageService/PullImage
	Nov 01 10:42:43 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:43.104700794Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=11f66241-1f91-4b61-8f60-103324903bc5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:43 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:43.106029295Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6f872359-c97e-4bb4-bb62-dac187cbe1da name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:43 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:43.109449149Z" level=info msg="Creating container: default/busybox/busybox" id=7f09a443-e6f4-4ac4-98ce-63bcdadc12e9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:43 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:43.109603489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:43 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:43.113997082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:43 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:43.114425139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:43 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:43.141795129Z" level=info msg="Created container b40967a1a0aa1c26d47fec257e08b5bb7b36b0968dcaf3f98ebcb4345463714b: default/busybox/busybox" id=7f09a443-e6f4-4ac4-98ce-63bcdadc12e9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:43 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:43.142872878Z" level=info msg="Starting container: b40967a1a0aa1c26d47fec257e08b5bb7b36b0968dcaf3f98ebcb4345463714b" id=37dc6444-55a0-4def-af87-cd197a66fd43 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:43 default-k8s-diff-port-433711 crio[775]: time="2025-11-01T10:42:43.145041127Z" level=info msg="Started container" PID=1910 containerID=b40967a1a0aa1c26d47fec257e08b5bb7b36b0968dcaf3f98ebcb4345463714b description=default/busybox/busybox id=37dc6444-55a0-4def-af87-cd197a66fd43 name=/runtime.v1.RuntimeService/StartContainer sandboxID=803022ab689eefdd53ea5a018f2e7afb579305284c5b4d7a2fd064aaf2aa04f6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	b40967a1a0aa1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago        Running             busybox                   0                   803022ab689ee       busybox                                                default
	6922e6365229f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 seconds ago       Running             coredns                   0                   0c08d9d28aca9       coredns-66bc5c9577-v7tvt                               kube-system
	676f790a7405f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago       Running             storage-provisioner       0                   d6e381310bff6       storage-provisioner                                    kube-system
	7f468625ff38d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      54 seconds ago       Running             kindnet-cni               0                   80fe2e2852573       kindnet-f2zwl                                          kube-system
	ecf20a0d93e64       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      54 seconds ago       Running             kube-proxy                0                   629f7e4495649       kube-proxy-2g94q                                       kube-system
	4b1bc3ee8245d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   252588bc68856       kube-controller-manager-default-k8s-diff-port-433711   kube-system
	1423b07529cc9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   f35133d5df39d       kube-scheduler-default-k8s-diff-port-433711            kube-system
	0e3ae66ac8799       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   c7b6cdff454f2       kube-apiserver-default-k8s-diff-port-433711            kube-system
	408281958c1b1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   0c6e0804d20fb       etcd-default-k8s-diff-port-433711                      kube-system
	
	
	==> coredns [6922e6365229f04324df089bfa03f8fa6b37f6be9e48a63bb9a0cdcd213493e3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59192 - 48752 "HINFO IN 5055109137056715778.1669489395674335846. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030967916s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-433711
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-433711
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=default-k8s-diff-port-433711
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_41_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:41:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-433711
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:42:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:42:51 +0000   Sat, 01 Nov 2025 10:41:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:42:51 +0000   Sat, 01 Nov 2025 10:41:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:42:51 +0000   Sat, 01 Nov 2025 10:41:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:42:51 +0000   Sat, 01 Nov 2025 10:42:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-433711
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e1d5f657-b6a1-42bf-b6a8-18744a9a0476
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-v7tvt                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     56s
	  kube-system                 etcd-default-k8s-diff-port-433711                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         63s
	  kube-system                 kindnet-f2zwl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-433711             250m (3%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-433711    200m (2%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-2g94q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-433711             100m (1%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 67s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  66s (x8 over 66s)  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s (x8 over 66s)  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     66s (x8 over 66s)  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientPID
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s                kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s                kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s                kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node default-k8s-diff-port-433711 event: Registered Node default-k8s-diff-port-433711 in Controller
	  Normal  NodeReady                15s                kubelet          Node default-k8s-diff-port-433711 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	[Nov 1 10:42] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000013] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [408281958c1b1ab69e007bdc1ad09920b60bc07f88ded62c20fb86b6022cf60d] <==
	{"level":"warn","ts":"2025-11-01T10:41:47.686734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.692914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.700343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.708370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.715735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.723805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.730880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.737400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.743900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.750172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.756574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.763607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.770189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.779671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.787029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.794867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.802398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.808616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.830024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.836162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:41:47.849854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:50.274031Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"188.768292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:42:50.274138Z","caller":"traceutil/trace.go:172","msg":"trace[1865883452] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations; range_end:; response_count:0; response_revision:482; }","duration":"188.939311ms","start":"2025-11-01T10:42:50.085181Z","end":"2025-11-01T10:42:50.274120Z","steps":["trace[1865883452] 'agreement among raft nodes before linearized reading'  (duration: 61.950273ms)","trace[1865883452] 'range keys from in-memory index tree'  (duration: 126.785663ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:42:50.274661Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.875731ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356351502797154 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:480 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:42:50.274748Z","caller":"traceutil/trace.go:172","msg":"trace[837849856] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"208.052246ms","start":"2025-11-01T10:42:50.066681Z","end":"2025-11-01T10:42:50.274733Z","steps":["trace[837849856] 'process raft request'  (duration: 80.534414ms)","trace[837849856] 'compare'  (duration: 126.766628ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:42:52 up  2:25,  0 user,  load average: 4.84, 3.95, 2.55
	Linux default-k8s-diff-port-433711 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7f468625ff38dc2827d29c935dc21149dc6a183180a331e5309bf0af37908a41] <==
	I1101 10:41:57.242772       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:41:57.243054       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:41:57.243205       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:41:57.243222       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:41:57.243248       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:41:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:41:57.445036       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:41:57.445468       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:41:57.445565       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:41:57.446010       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 10:42:27.447056       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1101 10:42:27.447056       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1101 10:42:27.447115       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1101 10:42:27.447173       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1101 10:42:28.645869       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:42:28.645900       1 metrics.go:72] Registering metrics
	I1101 10:42:28.646011       1 controller.go:711] "Syncing nftables rules"
	I1101 10:42:37.449637       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:42:37.449693       1 main.go:301] handling current node
	I1101 10:42:47.448100       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:42:47.448132       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0e3ae66ac8799eb1ada97c5dcb6e73e63e2889b8c35543fef39f8817a2d8ed3b] <==
	E1101 10:41:48.487039       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1101 10:41:48.492228       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:41:48.496385       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:41:48.496584       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:41:48.502293       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:41:48.503072       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:41:48.690722       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:41:49.295720       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:41:49.299273       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:41:49.299293       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:41:49.729292       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:41:49.763658       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:41:49.798561       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:41:49.804843       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 10:41:49.805727       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:41:49.809578       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:41:50.679452       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:41:50.728516       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:41:50.738471       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:41:50.746575       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:41:56.486412       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:41:56.652851       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:41:56.664137       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:41:56.688965       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 10:42:50.715872       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:40306: use of closed network connection
	
	
	==> kube-controller-manager [4b1bc3ee8245dfa63fa55bc4b59c10338f4f17c856a61e62c18e8c175e731699] <==
	I1101 10:41:55.682250       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:41:55.683478       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:41:55.683519       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:41:55.683542       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:41:55.683560       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:41:55.683608       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:41:55.683619       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:41:55.683626       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:41:55.683747       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:41:55.684727       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:41:55.686996       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:41:55.687347       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:41:55.692371       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-433711" podCIDRs=["10.244.0.0/24"]
	I1101 10:41:55.692419       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:41:55.697661       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:41:55.698045       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:41:55.698132       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-433711"
	I1101 10:41:55.698184       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 10:41:55.704139       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:41:55.708727       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:41:55.719064       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:41:55.728081       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:41:55.728099       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:41:55.728107       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:42:40.706142       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ecf20a0d93e64ad18bda0002656fd8fc76c872ab9ccf4765535e4ee90b80aa38] <==
	I1101 10:41:57.108232       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:41:57.180662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:41:57.281324       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:41:57.281369       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:41:57.281488       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:41:57.300245       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:41:57.300294       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:41:57.305463       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:41:57.305818       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:41:57.305844       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:41:57.307090       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:41:57.307116       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:41:57.307156       1 config.go:200] "Starting service config controller"
	I1101 10:41:57.307162       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:41:57.307290       1 config.go:309] "Starting node config controller"
	I1101 10:41:57.307679       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:41:57.307699       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:41:57.307707       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:41:57.307713       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:41:57.407821       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:41:57.407827       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:41:57.408977       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1423b07529cc935dc97be8f51a91df813bd47c06e14f937c50f9bb5c9c57ab07] <==
	E1101 10:41:48.347698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:41:48.347813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:41:48.347889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:41:48.347909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:41:48.347958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:41:48.348003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:41:48.348024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:41:48.347997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:41:48.348130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:41:48.348161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:41:48.348349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:41:48.348367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:41:48.348408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:41:48.348445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:41:48.349012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:41:49.308115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:41:49.332580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:41:49.415078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:41:49.425260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:41:49.478747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:41:49.483711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:41:49.496775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:41:49.527863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:41:49.723420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 10:41:52.138760       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:41:51 default-k8s-diff-port-433711 kubelet[1303]: E1101 10:41:51.576368    1303 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-433711\" already exists" pod="kube-system/etcd-default-k8s-diff-port-433711"
	Nov 01 10:41:51 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:51.588350    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-433711" podStartSLOduration=1.588333473 podStartE2EDuration="1.588333473s" podCreationTimestamp="2025-11-01 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:51.588284272 +0000 UTC m=+1.123659617" watchObservedRunningTime="2025-11-01 10:41:51.588333473 +0000 UTC m=+1.123708807"
	Nov 01 10:41:51 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:51.608149    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-433711" podStartSLOduration=1.608128623 podStartE2EDuration="1.608128623s" podCreationTimestamp="2025-11-01 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:51.598890568 +0000 UTC m=+1.134265918" watchObservedRunningTime="2025-11-01 10:41:51.608128623 +0000 UTC m=+1.143503966"
	Nov 01 10:41:51 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:51.608270    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-433711" podStartSLOduration=2.608262861 podStartE2EDuration="2.608262861s" podCreationTimestamp="2025-11-01 10:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:51.608014669 +0000 UTC m=+1.143390011" watchObservedRunningTime="2025-11-01 10:41:51.608262861 +0000 UTC m=+1.143638206"
	Nov 01 10:41:51 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:51.616265    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-433711" podStartSLOduration=1.616249774 podStartE2EDuration="1.616249774s" podCreationTimestamp="2025-11-01 10:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:51.615989459 +0000 UTC m=+1.151364802" watchObservedRunningTime="2025-11-01 10:41:51.616249774 +0000 UTC m=+1.151625117"
	Nov 01 10:41:55 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:55.784680    1303 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:41:55 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:55.785482    1303 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:41:56 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:56.769870    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/750d06bb-d295-4d98-b8e4-71984b10453c-xtables-lock\") pod \"kindnet-f2zwl\" (UID: \"750d06bb-d295-4d98-b8e4-71984b10453c\") " pod="kube-system/kindnet-f2zwl"
	Nov 01 10:41:56 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:56.769925    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9w6\" (UniqueName: \"kubernetes.io/projected/750d06bb-d295-4d98-b8e4-71984b10453c-kube-api-access-hm9w6\") pod \"kindnet-f2zwl\" (UID: \"750d06bb-d295-4d98-b8e4-71984b10453c\") " pod="kube-system/kindnet-f2zwl"
	Nov 01 10:41:56 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:56.769960    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18217a2b-fb40-4fb2-9674-0194a9462c32-lib-modules\") pod \"kube-proxy-2g94q\" (UID: \"18217a2b-fb40-4fb2-9674-0194a9462c32\") " pod="kube-system/kube-proxy-2g94q"
	Nov 01 10:41:56 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:56.769981    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/750d06bb-d295-4d98-b8e4-71984b10453c-cni-cfg\") pod \"kindnet-f2zwl\" (UID: \"750d06bb-d295-4d98-b8e4-71984b10453c\") " pod="kube-system/kindnet-f2zwl"
	Nov 01 10:41:56 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:56.770034    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/18217a2b-fb40-4fb2-9674-0194a9462c32-kube-proxy\") pod \"kube-proxy-2g94q\" (UID: \"18217a2b-fb40-4fb2-9674-0194a9462c32\") " pod="kube-system/kube-proxy-2g94q"
	Nov 01 10:41:56 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:56.770078    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18217a2b-fb40-4fb2-9674-0194a9462c32-xtables-lock\") pod \"kube-proxy-2g94q\" (UID: \"18217a2b-fb40-4fb2-9674-0194a9462c32\") " pod="kube-system/kube-proxy-2g94q"
	Nov 01 10:41:56 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:56.770108    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2twz\" (UniqueName: \"kubernetes.io/projected/18217a2b-fb40-4fb2-9674-0194a9462c32-kube-api-access-r2twz\") pod \"kube-proxy-2g94q\" (UID: \"18217a2b-fb40-4fb2-9674-0194a9462c32\") " pod="kube-system/kube-proxy-2g94q"
	Nov 01 10:41:56 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:56.770128    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/750d06bb-d295-4d98-b8e4-71984b10453c-lib-modules\") pod \"kindnet-f2zwl\" (UID: \"750d06bb-d295-4d98-b8e4-71984b10453c\") " pod="kube-system/kindnet-f2zwl"
	Nov 01 10:41:57 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:57.590783    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-f2zwl" podStartSLOduration=1.59076489 podStartE2EDuration="1.59076489s" podCreationTimestamp="2025-11-01 10:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:57.590659206 +0000 UTC m=+7.126034548" watchObservedRunningTime="2025-11-01 10:41:57.59076489 +0000 UTC m=+7.126140233"
	Nov 01 10:41:57 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:41:57.600118    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2g94q" podStartSLOduration=1.6001012669999999 podStartE2EDuration="1.600101267s" podCreationTimestamp="2025-11-01 10:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:41:57.599968824 +0000 UTC m=+7.135344169" watchObservedRunningTime="2025-11-01 10:41:57.600101267 +0000 UTC m=+7.135476610"
	Nov 01 10:42:37 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:42:37.595961    1303 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 10:42:37 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:42:37.674943    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/93198445-c661-4c14-bb6f-2e13eb9c10ea-tmp\") pod \"storage-provisioner\" (UID: \"93198445-c661-4c14-bb6f-2e13eb9c10ea\") " pod="kube-system/storage-provisioner"
	Nov 01 10:42:37 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:42:37.675011    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc75l\" (UniqueName: \"kubernetes.io/projected/a952ead8-9f44-4ac5-8145-2a76d6bc46a3-kube-api-access-bc75l\") pod \"coredns-66bc5c9577-v7tvt\" (UID: \"a952ead8-9f44-4ac5-8145-2a76d6bc46a3\") " pod="kube-system/coredns-66bc5c9577-v7tvt"
	Nov 01 10:42:37 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:42:37.675108    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npm55\" (UniqueName: \"kubernetes.io/projected/93198445-c661-4c14-bb6f-2e13eb9c10ea-kube-api-access-npm55\") pod \"storage-provisioner\" (UID: \"93198445-c661-4c14-bb6f-2e13eb9c10ea\") " pod="kube-system/storage-provisioner"
	Nov 01 10:42:37 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:42:37.675158    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a952ead8-9f44-4ac5-8145-2a76d6bc46a3-config-volume\") pod \"coredns-66bc5c9577-v7tvt\" (UID: \"a952ead8-9f44-4ac5-8145-2a76d6bc46a3\") " pod="kube-system/coredns-66bc5c9577-v7tvt"
	Nov 01 10:42:38 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:42:38.699739    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.699718804 podStartE2EDuration="42.699718804s" podCreationTimestamp="2025-11-01 10:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:42:38.699354659 +0000 UTC m=+48.234730004" watchObservedRunningTime="2025-11-01 10:42:38.699718804 +0000 UTC m=+48.235094153"
	Nov 01 10:42:38 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:42:38.699853    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-v7tvt" podStartSLOduration=42.699844887 podStartE2EDuration="42.699844887s" podCreationTimestamp="2025-11-01 10:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:42:38.68932335 +0000 UTC m=+48.224698695" watchObservedRunningTime="2025-11-01 10:42:38.699844887 +0000 UTC m=+48.235220230"
	Nov 01 10:42:40 default-k8s-diff-port-433711 kubelet[1303]: I1101 10:42:40.696135    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vj6l\" (UniqueName: \"kubernetes.io/projected/59420294-cb51-4139-83a6-0ab57cb66dde-kube-api-access-5vj6l\") pod \"busybox\" (UID: \"59420294-cb51-4139-83a6-0ab57cb66dde\") " pod="default/busybox"
	
	
	==> storage-provisioner [676f790a7405f8c924b7dea78db260774545f5a51bdfc68b154ca1f5a8ff9b10] <==
	I1101 10:42:37.993387       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:42:38.002353       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:42:38.002388       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:42:38.004614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:38.009369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:42:38.009570       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:42:38.009681       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"177ac40d-31f6-48f5-be20-6d54b17caa55", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-433711_9e609ecc-ced0-45b8-96d7-1a43dfe1a7d7 became leader
	I1101 10:42:38.009764       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-433711_9e609ecc-ced0-45b8-96d7-1a43dfe1a7d7!
	W1101 10:42:38.011635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:38.015250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:42:38.110292       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-433711_9e609ecc-ced0-45b8-96d7-1a43dfe1a7d7!
	W1101 10:42:40.018733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:40.023020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:42.026029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:42.030547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:44.034390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:44.038783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:46.044425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:46.049302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:48.053017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:48.061415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:50.064579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:50.275734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:52.278629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:52.282814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-433711 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-336923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-336923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (277.08653ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-336923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
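Editor's note: the MK_ADDON_ENABLE_PAUSED exit above is raised by minikube's paused-container check, which shells into the node and runs `sudo runc list -f json`; per the stderr it fails on this crio node because /run/runc does not exist. A minimal way to re-run that probe by hand, assuming the newest-cni-336923 profile from this run is still up (a sketch, not part of the test harness):

	# Re-run the same command shown in the stderr above, inside the node via minikube ssh
	out/minikube-linux-amd64 -p newest-cni-336923 ssh -- sudo runc list -f json
	# On this node it exits 1 with: open /run/runc: no such file or directory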
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-336923
E1101 10:43:09.190343   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/auto-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:243: (dbg) docker inspect newest-cni-336923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb",
	        "Created": "2025-11-01T10:42:50.393754457Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 376388,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:42:50.424947264Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb/hosts",
	        "LogPath": "/var/lib/docker/containers/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb-json.log",
	        "Name": "/newest-cni-336923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-336923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-336923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb",
	                "LowerDir": "/var/lib/docker/overlay2/98f14fe0a7fd5569b2d5ff51d7565e3b5a30ff46cfb917c74d0aef27f139bdd2-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98f14fe0a7fd5569b2d5ff51d7565e3b5a30ff46cfb917c74d0aef27f139bdd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98f14fe0a7fd5569b2d5ff51d7565e3b5a30ff46cfb917c74d0aef27f139bdd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98f14fe0a7fd5569b2d5ff51d7565e3b5a30ff46cfb917c74d0aef27f139bdd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-336923",
	                "Source": "/var/lib/docker/volumes/newest-cni-336923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-336923",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-336923",
	                "name.minikube.sigs.k8s.io": "newest-cni-336923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "57afa6bb69e2262c4c7dd228c9f34e68858cb4595fa006b29386c390663130f9",
	            "SandboxKey": "/var/run/docker/netns/57afa6bb69e2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-336923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:b3:f4:f5:80:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e144a937d06e262f0e3ad8a76371e64c3d6dd9439eb433489836f813e4181b37",
	                    "EndpointID": "ad32bf2258ce3e3bc2c8110141c4844266bde79755c8a37f4a5e7d0cb7dbc296",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-336923",
	                        "f7f97f7d0c24"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
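Editor's note: the port mappings buried in the inspect dump above can be pulled out directly with docker inspect's Go-template formatter; a sketch using the container name from this run (the values match the NetworkSettings.Ports block above, e.g. 8443/tcp -> 33126):

	# Hypothetical helper, not part of the test harness: print the host port mapped to the apiserver port
	docker inspect -f '{{ (index .NetworkSettings.Ports "8443/tcp" 0).HostPort }}' newest-cni-336923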
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-336923 -n newest-cni-336923
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-336923 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-336923 logs -n 25: (1.022808765s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-707467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p no-preload-753486 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-071527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p embed-certs-071527 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p no-preload-753486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p no-preload-753486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-071527 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ image   │ no-preload-753486 image list --format=json                                                                                                                                                                                                    │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p no-preload-753486 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ old-k8s-version-707467 image list --format=json                                                                                                                                                                                               │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p old-k8s-version-707467 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-433711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-433711 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-336923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-433711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:43:09
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:43:09.433764  380170 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:43:09.434067  380170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:09.434078  380170 out.go:374] Setting ErrFile to fd 2...
	I1101 10:43:09.434083  380170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:09.434282  380170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:43:09.434759  380170 out.go:368] Setting JSON to false
	I1101 10:43:09.435884  380170 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8729,"bootTime":1761985060,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:43:09.435980  380170 start.go:143] virtualization: kvm guest
	I1101 10:43:09.437888  380170 out.go:179] * [default-k8s-diff-port-433711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:43:09.438945  380170 notify.go:221] Checking for updates...
	I1101 10:43:09.438964  380170 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:43:09.440132  380170 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:43:09.441564  380170 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:09.442699  380170 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:43:09.443774  380170 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:43:09.444780  380170 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:43:09.449351  380170 config.go:182] Loaded profile config "default-k8s-diff-port-433711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:09.449826  380170 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:43:09.475735  380170 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:43:09.475843  380170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:09.543642  380170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-01 10:43:09.533166887 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:09.543789  380170 docker.go:319] overlay module found
	I1101 10:43:09.546342  380170 out.go:179] * Using the docker driver based on existing profile
	I1101 10:43:09.547437  380170 start.go:309] selected driver: docker
	I1101 10:43:09.547455  380170 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-433711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-433711 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:09.547565  380170 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:43:09.548143  380170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:09.609326  380170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-01 10:43:09.599016767 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:09.609629  380170 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:43:09.609661  380170 cni.go:84] Creating CNI manager for ""
	I1101 10:43:09.609718  380170 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:43:09.609756  380170 start.go:353] cluster config:
	{Name:default-k8s-diff-port-433711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-433711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:09.611514  380170 out.go:179] * Starting "default-k8s-diff-port-433711" primary control-plane node in "default-k8s-diff-port-433711" cluster
	I1101 10:43:09.612576  380170 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:43:09.613707  380170 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	
	
	==> CRI-O <==
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.411895789Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.414230297Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=88cb5ce0-4f28-49cb-82f0-fd7d68d6c53d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.416173085Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.417689343Z" level=info msg="Ran pod sandbox f3ea9900af9f20ca03e0aa734c2d8441c59ab466f28f00ed6fa66ed25ee08823 with infra container: kube-system/kindnet-6lbk4/POD" id=88cb5ce0-4f28-49cb-82f0-fd7d68d6c53d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.419886629Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=33334ca4-00fc-40fd-8eb7-d88da9af9978 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.421586759Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a0a3b503-44f4-4160-8741-9b2fbfeae2e8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.422649378Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1637b647-7026-40c9-9871-d928adc23449 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.423555524Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.424480099Z" level=info msg="Ran pod sandbox 98f152ae13cac702f8bc6093be9feeafaf83984974b926f8d9b0dd76a1bcefeb with infra container: kube-system/kube-proxy-z65pd/POD" id=a0a3b503-44f4-4160-8741-9b2fbfeae2e8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.425576335Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ae4e9b15-bb74-4d35-8c20-06e65cfe1b3c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.426430433Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=db908b79-f588-4979-b06e-4a5d17d760da name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.426884463Z" level=info msg="Creating container: kube-system/kindnet-6lbk4/kindnet-cni" id=6cd46339-67f0-41a0-ab5d-e43230a21a7e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.426990342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.429016876Z" level=info msg="Creating container: kube-system/kube-proxy-z65pd/kube-proxy" id=5645735b-e2fc-4db1-a33c-6029b2649f51 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.429128625Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.432518508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.432942691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.435455707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.436059844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.459803058Z" level=info msg="Created container 0439712aa2083854a112b6708a99fffd4c29f1daec3566ef3a7e20e9b60d9e29: kube-system/kindnet-6lbk4/kindnet-cni" id=6cd46339-67f0-41a0-ab5d-e43230a21a7e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.460484018Z" level=info msg="Starting container: 0439712aa2083854a112b6708a99fffd4c29f1daec3566ef3a7e20e9b60d9e29" id=eb6068b3-a4e6-4bdc-b469-a3948ac8b9a3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.462445975Z" level=info msg="Started container" PID=1592 containerID=0439712aa2083854a112b6708a99fffd4c29f1daec3566ef3a7e20e9b60d9e29 description=kube-system/kindnet-6lbk4/kindnet-cni id=eb6068b3-a4e6-4bdc-b469-a3948ac8b9a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3ea9900af9f20ca03e0aa734c2d8441c59ab466f28f00ed6fa66ed25ee08823
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.463424274Z" level=info msg="Created container 7fe0e7aa668cdf7c54b9c84ae06c98e15421d9af81c904290fb9e8cd10ab8cfc: kube-system/kube-proxy-z65pd/kube-proxy" id=5645735b-e2fc-4db1-a33c-6029b2649f51 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.46412553Z" level=info msg="Starting container: 7fe0e7aa668cdf7c54b9c84ae06c98e15421d9af81c904290fb9e8cd10ab8cfc" id=e5ec54fe-877e-4c62-9c8c-5ff8c63bb583 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:43:08 newest-cni-336923 crio[773]: time="2025-11-01T10:43:08.467105551Z" level=info msg="Started container" PID=1593 containerID=7fe0e7aa668cdf7c54b9c84ae06c98e15421d9af81c904290fb9e8cd10ab8cfc description=kube-system/kube-proxy-z65pd/kube-proxy id=e5ec54fe-877e-4c62-9c8c-5ff8c63bb583 name=/runtime.v1.RuntimeService/StartContainer sandboxID=98f152ae13cac702f8bc6093be9feeafaf83984974b926f8d9b0dd76a1bcefeb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7fe0e7aa668cd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   98f152ae13cac       kube-proxy-z65pd                            kube-system
	0439712aa2083       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   f3ea9900af9f2       kindnet-6lbk4                               kube-system
	085ef635ae00e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      0                   f54802e80fae1       etcd-newest-cni-336923                      kube-system
	a181c5fa33948       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            0                   91cb0f3184b98       kube-apiserver-newest-cni-336923            kube-system
	9245656218d5a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   0                   7f9ff250a5e0e       kube-controller-manager-newest-cni-336923   kube-system
	95f451e526aba       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            0                   ff1ba62c94e01       kube-scheduler-newest-cni-336923            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-336923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-336923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=newest-cni-336923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_43_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:43:00 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-336923
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:43:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:43:02 +0000   Sat, 01 Nov 2025 10:42:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:43:02 +0000   Sat, 01 Nov 2025 10:42:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:43:02 +0000   Sat, 01 Nov 2025 10:42:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 10:43:02 +0000   Sat, 01 Nov 2025 10:42:58 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-336923
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                6f793bc2-07ee-4607-b191-dc232242ea47
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-336923                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-6lbk4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-336923             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-336923    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-z65pd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-336923             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-336923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-336923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-336923 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-336923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-336923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-336923 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-336923 event: Registered Node newest-cni-336923 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	[Nov 1 10:42] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000013] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [085ef635ae00e4469f50dd662a3bca33f712bbd6937ce52ae39548d29e472c17] <==
	{"level":"warn","ts":"2025-11-01T10:42:59.363306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.370274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.387232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.394092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.401082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.406999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.413143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.419102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.426260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.438689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.445551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.454070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.461139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.468034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.473972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.480351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.487570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.493685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.501529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.508632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.515835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.540985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.548155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.555392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:59.607436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35152","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:43:10 up  2:25,  0 user,  load average: 4.48, 3.93, 2.58
	Linux newest-cni-336923 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0439712aa2083854a112b6708a99fffd4c29f1daec3566ef3a7e20e9b60d9e29] <==
	I1101 10:43:08.618240       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:43:08.618524       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:43:08.618693       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:43:08.618709       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:43:08.618731       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:43:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:43:08.915242       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:43:08.915806       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:43:08.915879       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:43:08.916100       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:43:09.311068       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:43:09.311107       1 metrics.go:72] Registering metrics
	I1101 10:43:09.311240       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [a181c5fa339484e1528932b0380127a2b0e925242cd0e520b469ec353860b3e4] <==
	I1101 10:43:00.094590       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:43:00.094596       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:43:00.096074       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:43:00.098521       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:43:00.099334       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 10:43:00.107468       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:43:00.109704       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:43:00.118716       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:43:00.999536       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 10:43:01.004188       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 10:43:01.004208       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:43:01.493762       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:43:01.536248       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:43:01.604025       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 10:43:01.610015       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 10:43:01.611082       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:43:01.615230       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:43:02.022668       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:43:02.485534       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:43:02.495860       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 10:43:02.504719       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:43:07.726456       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:43:07.776963       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:43:07.780951       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:43:08.075883       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9245656218d5a87ae94269ac4d1d177ac85c9eecf91a33797e112bc04c470609] <==
	I1101 10:43:07.020914       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:43:07.020925       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:43:07.020930       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:43:07.022234       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:43:07.022247       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:43:07.022296       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:43:07.022368       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:43:07.022290       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:43:07.022407       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:43:07.022404       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:43:07.022419       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:43:07.022635       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:43:07.022753       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:43:07.023071       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:43:07.024411       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:43:07.025881       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:43:07.026961       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:43:07.026962       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:43:07.027018       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:43:07.027052       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:43:07.027060       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:43:07.027065       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:43:07.032885       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:43:07.033122       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-336923" podCIDRs=["10.42.0.0/24"]
	I1101 10:43:07.045614       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [7fe0e7aa668cdf7c54b9c84ae06c98e15421d9af81c904290fb9e8cd10ab8cfc] <==
	I1101 10:43:08.507307       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:43:08.584356       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:43:08.685099       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:43:08.685156       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:43:08.685288       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:43:08.705640       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:43:08.705692       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:43:08.710741       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:43:08.711079       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:43:08.711114       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:43:08.712414       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:43:08.712708       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:43:08.712457       1 config.go:200] "Starting service config controller"
	I1101 10:43:08.712789       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:43:08.712479       1 config.go:309] "Starting node config controller"
	I1101 10:43:08.712808       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:43:08.712814       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:43:08.712543       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:43:08.712823       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:43:08.813796       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:43:08.813798       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:43:08.813805       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [95f451e526aba71c9e9d9314de0379f5d2a2ffda0fc779131a25dcf3762b2e8b] <==
	I1101 10:43:00.690315       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:43:00.692369       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:43:00.692413       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:43:00.692888       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:43:00.692982       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 10:43:00.694034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 10:43:00.694257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:43:00.695882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:43:00.695921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:43:00.695979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:43:00.696385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:43:00.696364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:43:00.696465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:43:00.696650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:43:00.696691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:43:00.696684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:43:00.696792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:43:00.696821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:43:00.696821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:43:00.696909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:43:00.696967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:43:00.697025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:43:00.697047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:43:00.697070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1101 10:43:01.893097       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: I1101 10:43:03.314299    1293 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: I1101 10:43:03.347668    1293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-336923"
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: I1101 10:43:03.347783    1293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-336923"
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: I1101 10:43:03.347886    1293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-336923"
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: I1101 10:43:03.347988    1293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-336923"
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: E1101 10:43:03.358819    1293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-336923\" already exists" pod="kube-system/etcd-newest-cni-336923"
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: E1101 10:43:03.359685    1293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-336923\" already exists" pod="kube-system/kube-apiserver-newest-cni-336923"
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: E1101 10:43:03.359725    1293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-336923\" already exists" pod="kube-system/kube-scheduler-newest-cni-336923"
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: E1101 10:43:03.359754    1293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-336923\" already exists" pod="kube-system/kube-controller-manager-newest-cni-336923"
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: I1101 10:43:03.387244    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-336923" podStartSLOduration=1.387220784 podStartE2EDuration="1.387220784s" podCreationTimestamp="2025-11-01 10:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:43:03.37647045 +0000 UTC m=+1.134358110" watchObservedRunningTime="2025-11-01 10:43:03.387220784 +0000 UTC m=+1.145108430"
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: I1101 10:43:03.387450    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-336923" podStartSLOduration=1.38743666 podStartE2EDuration="1.38743666s" podCreationTimestamp="2025-11-01 10:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:43:03.387165539 +0000 UTC m=+1.145053197" watchObservedRunningTime="2025-11-01 10:43:03.38743666 +0000 UTC m=+1.145324319"
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: I1101 10:43:03.398983    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-336923" podStartSLOduration=2.398965631 podStartE2EDuration="2.398965631s" podCreationTimestamp="2025-11-01 10:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:43:03.398962573 +0000 UTC m=+1.156850232" watchObservedRunningTime="2025-11-01 10:43:03.398965631 +0000 UTC m=+1.156853290"
	Nov 01 10:43:03 newest-cni-336923 kubelet[1293]: I1101 10:43:03.420010    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-336923" podStartSLOduration=1.419980588 podStartE2EDuration="1.419980588s" podCreationTimestamp="2025-11-01 10:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:43:03.409268153 +0000 UTC m=+1.167155812" watchObservedRunningTime="2025-11-01 10:43:03.419980588 +0000 UTC m=+1.177868251"
	Nov 01 10:43:07 newest-cni-336923 kubelet[1293]: I1101 10:43:07.054465    1293 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 10:43:07 newest-cni-336923 kubelet[1293]: I1101 10:43:07.055148    1293 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 10:43:08 newest-cni-336923 kubelet[1293]: I1101 10:43:08.151079    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e62d231c-e1d5-4e4a-81e1-0be9614e211d-lib-modules\") pod \"kindnet-6lbk4\" (UID: \"e62d231c-e1d5-4e4a-81e1-0be9614e211d\") " pod="kube-system/kindnet-6lbk4"
	Nov 01 10:43:08 newest-cni-336923 kubelet[1293]: I1101 10:43:08.151137    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnllt\" (UniqueName: \"kubernetes.io/projected/5a6496ad-eaf7-4f96-af7e-0dd5f88346c3-kube-api-access-fnllt\") pod \"kube-proxy-z65pd\" (UID: \"5a6496ad-eaf7-4f96-af7e-0dd5f88346c3\") " pod="kube-system/kube-proxy-z65pd"
	Nov 01 10:43:08 newest-cni-336923 kubelet[1293]: I1101 10:43:08.151167    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5a6496ad-eaf7-4f96-af7e-0dd5f88346c3-kube-proxy\") pod \"kube-proxy-z65pd\" (UID: \"5a6496ad-eaf7-4f96-af7e-0dd5f88346c3\") " pod="kube-system/kube-proxy-z65pd"
	Nov 01 10:43:08 newest-cni-336923 kubelet[1293]: I1101 10:43:08.151192    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e62d231c-e1d5-4e4a-81e1-0be9614e211d-cni-cfg\") pod \"kindnet-6lbk4\" (UID: \"e62d231c-e1d5-4e4a-81e1-0be9614e211d\") " pod="kube-system/kindnet-6lbk4"
	Nov 01 10:43:08 newest-cni-336923 kubelet[1293]: I1101 10:43:08.151212    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a6496ad-eaf7-4f96-af7e-0dd5f88346c3-xtables-lock\") pod \"kube-proxy-z65pd\" (UID: \"5a6496ad-eaf7-4f96-af7e-0dd5f88346c3\") " pod="kube-system/kube-proxy-z65pd"
	Nov 01 10:43:08 newest-cni-336923 kubelet[1293]: I1101 10:43:08.151237    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a6496ad-eaf7-4f96-af7e-0dd5f88346c3-lib-modules\") pod \"kube-proxy-z65pd\" (UID: \"5a6496ad-eaf7-4f96-af7e-0dd5f88346c3\") " pod="kube-system/kube-proxy-z65pd"
	Nov 01 10:43:08 newest-cni-336923 kubelet[1293]: I1101 10:43:08.151273    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e62d231c-e1d5-4e4a-81e1-0be9614e211d-xtables-lock\") pod \"kindnet-6lbk4\" (UID: \"e62d231c-e1d5-4e4a-81e1-0be9614e211d\") " pod="kube-system/kindnet-6lbk4"
	Nov 01 10:43:08 newest-cni-336923 kubelet[1293]: I1101 10:43:08.151295    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zndzc\" (UniqueName: \"kubernetes.io/projected/e62d231c-e1d5-4e4a-81e1-0be9614e211d-kube-api-access-zndzc\") pod \"kindnet-6lbk4\" (UID: \"e62d231c-e1d5-4e4a-81e1-0be9614e211d\") " pod="kube-system/kindnet-6lbk4"
	Nov 01 10:43:09 newest-cni-336923 kubelet[1293]: I1101 10:43:09.382982    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6lbk4" podStartSLOduration=1.3829564890000001 podStartE2EDuration="1.382956489s" podCreationTimestamp="2025-11-01 10:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:43:09.382703781 +0000 UTC m=+7.140591441" watchObservedRunningTime="2025-11-01 10:43:09.382956489 +0000 UTC m=+7.140844148"
	Nov 01 10:43:09 newest-cni-336923 kubelet[1293]: I1101 10:43:09.383142    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z65pd" podStartSLOduration=1.383130304 podStartE2EDuration="1.383130304s" podCreationTimestamp="2025-11-01 10:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 10:43:09.373085055 +0000 UTC m=+7.130972715" watchObservedRunningTime="2025-11-01 10:43:09.383130304 +0000 UTC m=+7.141017964"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-336923 -n newest-cni-336923
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-336923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-j9pcl storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-336923 describe pod coredns-66bc5c9577-j9pcl storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-336923 describe pod coredns-66bc5c9577-j9pcl storage-provisioner: exit status 1 (64.950284ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-j9pcl" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-336923 describe pod coredns-66bc5c9577-j9pcl storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.15s)

x
+
TestStartStop/group/embed-certs/serial/Pause (5.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-071527 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-071527 --alsologtostderr -v=1: exit status 80 (2.257404462s)

-- stdout --
	* Pausing node embed-certs-071527 ... 
	
	

-- /stdout --
** stderr ** 
	I1101 10:43:10.305442  381032 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:43:10.305602  381032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:10.305617  381032 out.go:374] Setting ErrFile to fd 2...
	I1101 10:43:10.305628  381032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:10.305915  381032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:43:10.306278  381032 out.go:368] Setting JSON to false
	I1101 10:43:10.306326  381032 mustload.go:66] Loading cluster: embed-certs-071527
	I1101 10:43:10.306825  381032 config.go:182] Loaded profile config "embed-certs-071527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:10.307372  381032 cli_runner.go:164] Run: docker container inspect embed-certs-071527 --format={{.State.Status}}
	I1101 10:43:10.328271  381032 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:43:10.328640  381032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:10.391355  381032 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 10:43:10.381118346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:10.392086  381032 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-071527 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:43:10.393659  381032 out.go:179] * Pausing node embed-certs-071527 ... 
	I1101 10:43:10.395022  381032 host.go:66] Checking if "embed-certs-071527" exists ...
	I1101 10:43:10.395355  381032 ssh_runner.go:195] Run: systemctl --version
	I1101 10:43:10.395399  381032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-071527
	I1101 10:43:10.414264  381032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/embed-certs-071527/id_rsa Username:docker}
	I1101 10:43:10.517026  381032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:10.530413  381032 pause.go:52] kubelet running: true
	I1101 10:43:10.530479  381032 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:43:10.706736  381032 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:43:10.706840  381032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:43:10.775381  381032 cri.go:89] found id: "e59b0c23f0acb4271150df2ed931effa3a18da97816d04337f00cb9f9f51c2a8"
	I1101 10:43:10.775413  381032 cri.go:89] found id: "109df5c1f202cc41b4dab35bbab63109f56af8ab4f5956ca5e594899e58d5315"
	I1101 10:43:10.775420  381032 cri.go:89] found id: "f3ba67113e6314dcb9c0efc27f90c60d26c7f48d641360b30290880a8ed70d00"
	I1101 10:43:10.775425  381032 cri.go:89] found id: "58af59e91290f11d91c4b295b1747d2701441b9dd29c32b69b4232b42c088e25"
	I1101 10:43:10.775429  381032 cri.go:89] found id: "80f31512805f16f08fa9705b27f3c1498892c2e94ef78e2ad2265ed098cdc17c"
	I1101 10:43:10.775435  381032 cri.go:89] found id: "e95c5bdefe5bab954d844595226fa1bc71903693fcc281f98c8ca4acd6ebaf44"
	I1101 10:43:10.775439  381032 cri.go:89] found id: "1e1f2165fff912b94ead346d574a39dc51a0e07c82ecfc46cf2218274dc3846b"
	I1101 10:43:10.775443  381032 cri.go:89] found id: "cdeac8cd5ed20ed69f2cae85240af0e1ad8eda39a544a107fdc467d0259e681f"
	I1101 10:43:10.775447  381032 cri.go:89] found id: "2c76e616b169eed9eccc0cbbe049577478d27b125b73db1838da83e15bac755d"
	I1101 10:43:10.775467  381032 cri.go:89] found id: "73b1ea07379255faee299a2f05ba98602b4c9b03bf2b2d42ba0cb18ee1d4811e"
	I1101 10:43:10.775475  381032 cri.go:89] found id: "aaae0b0748585fd0a6c527a566037052ad872f863608497afa529bfd03c0c2e9"
	I1101 10:43:10.775480  381032 cri.go:89] found id: ""
	I1101 10:43:10.775544  381032 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:43:10.787374  381032 retry.go:31] will retry after 136.839106ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:10Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:43:10.924680  381032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:10.937880  381032 pause.go:52] kubelet running: false
	I1101 10:43:10.937947  381032 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:43:11.104601  381032 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:43:11.104716  381032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:43:11.178683  381032 cri.go:89] found id: "e59b0c23f0acb4271150df2ed931effa3a18da97816d04337f00cb9f9f51c2a8"
	I1101 10:43:11.178708  381032 cri.go:89] found id: "109df5c1f202cc41b4dab35bbab63109f56af8ab4f5956ca5e594899e58d5315"
	I1101 10:43:11.178712  381032 cri.go:89] found id: "f3ba67113e6314dcb9c0efc27f90c60d26c7f48d641360b30290880a8ed70d00"
	I1101 10:43:11.178717  381032 cri.go:89] found id: "58af59e91290f11d91c4b295b1747d2701441b9dd29c32b69b4232b42c088e25"
	I1101 10:43:11.178721  381032 cri.go:89] found id: "80f31512805f16f08fa9705b27f3c1498892c2e94ef78e2ad2265ed098cdc17c"
	I1101 10:43:11.178726  381032 cri.go:89] found id: "e95c5bdefe5bab954d844595226fa1bc71903693fcc281f98c8ca4acd6ebaf44"
	I1101 10:43:11.178730  381032 cri.go:89] found id: "1e1f2165fff912b94ead346d574a39dc51a0e07c82ecfc46cf2218274dc3846b"
	I1101 10:43:11.178735  381032 cri.go:89] found id: "cdeac8cd5ed20ed69f2cae85240af0e1ad8eda39a544a107fdc467d0259e681f"
	I1101 10:43:11.178739  381032 cri.go:89] found id: "2c76e616b169eed9eccc0cbbe049577478d27b125b73db1838da83e15bac755d"
	I1101 10:43:11.178754  381032 cri.go:89] found id: "73b1ea07379255faee299a2f05ba98602b4c9b03bf2b2d42ba0cb18ee1d4811e"
	I1101 10:43:11.178762  381032 cri.go:89] found id: "aaae0b0748585fd0a6c527a566037052ad872f863608497afa529bfd03c0c2e9"
	I1101 10:43:11.178766  381032 cri.go:89] found id: ""
	I1101 10:43:11.178811  381032 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:43:11.191426  381032 retry.go:31] will retry after 272.996584ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:11Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:43:11.464994  381032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:11.477645  381032 pause.go:52] kubelet running: false
	I1101 10:43:11.477709  381032 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:43:11.615368  381032 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:43:11.615462  381032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:43:11.678835  381032 cri.go:89] found id: "e59b0c23f0acb4271150df2ed931effa3a18da97816d04337f00cb9f9f51c2a8"
	I1101 10:43:11.678855  381032 cri.go:89] found id: "109df5c1f202cc41b4dab35bbab63109f56af8ab4f5956ca5e594899e58d5315"
	I1101 10:43:11.678858  381032 cri.go:89] found id: "f3ba67113e6314dcb9c0efc27f90c60d26c7f48d641360b30290880a8ed70d00"
	I1101 10:43:11.678862  381032 cri.go:89] found id: "58af59e91290f11d91c4b295b1747d2701441b9dd29c32b69b4232b42c088e25"
	I1101 10:43:11.678864  381032 cri.go:89] found id: "80f31512805f16f08fa9705b27f3c1498892c2e94ef78e2ad2265ed098cdc17c"
	I1101 10:43:11.678867  381032 cri.go:89] found id: "e95c5bdefe5bab954d844595226fa1bc71903693fcc281f98c8ca4acd6ebaf44"
	I1101 10:43:11.678870  381032 cri.go:89] found id: "1e1f2165fff912b94ead346d574a39dc51a0e07c82ecfc46cf2218274dc3846b"
	I1101 10:43:11.678872  381032 cri.go:89] found id: "cdeac8cd5ed20ed69f2cae85240af0e1ad8eda39a544a107fdc467d0259e681f"
	I1101 10:43:11.678874  381032 cri.go:89] found id: "2c76e616b169eed9eccc0cbbe049577478d27b125b73db1838da83e15bac755d"
	I1101 10:43:11.678880  381032 cri.go:89] found id: "73b1ea07379255faee299a2f05ba98602b4c9b03bf2b2d42ba0cb18ee1d4811e"
	I1101 10:43:11.678882  381032 cri.go:89] found id: "aaae0b0748585fd0a6c527a566037052ad872f863608497afa529bfd03c0c2e9"
	I1101 10:43:11.678884  381032 cri.go:89] found id: ""
	I1101 10:43:11.678919  381032 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:43:11.690510  381032 retry.go:31] will retry after 524.148756ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:11Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:43:12.215209  381032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:12.228158  381032 pause.go:52] kubelet running: false
	I1101 10:43:12.228219  381032 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:43:12.390968  381032 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:43:12.391048  381032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:43:12.464739  381032 cri.go:89] found id: "e59b0c23f0acb4271150df2ed931effa3a18da97816d04337f00cb9f9f51c2a8"
	I1101 10:43:12.464768  381032 cri.go:89] found id: "109df5c1f202cc41b4dab35bbab63109f56af8ab4f5956ca5e594899e58d5315"
	I1101 10:43:12.464774  381032 cri.go:89] found id: "f3ba67113e6314dcb9c0efc27f90c60d26c7f48d641360b30290880a8ed70d00"
	I1101 10:43:12.464779  381032 cri.go:89] found id: "58af59e91290f11d91c4b295b1747d2701441b9dd29c32b69b4232b42c088e25"
	I1101 10:43:12.464783  381032 cri.go:89] found id: "80f31512805f16f08fa9705b27f3c1498892c2e94ef78e2ad2265ed098cdc17c"
	I1101 10:43:12.464788  381032 cri.go:89] found id: "e95c5bdefe5bab954d844595226fa1bc71903693fcc281f98c8ca4acd6ebaf44"
	I1101 10:43:12.464793  381032 cri.go:89] found id: "1e1f2165fff912b94ead346d574a39dc51a0e07c82ecfc46cf2218274dc3846b"
	I1101 10:43:12.464797  381032 cri.go:89] found id: "cdeac8cd5ed20ed69f2cae85240af0e1ad8eda39a544a107fdc467d0259e681f"
	I1101 10:43:12.464800  381032 cri.go:89] found id: "2c76e616b169eed9eccc0cbbe049577478d27b125b73db1838da83e15bac755d"
	I1101 10:43:12.464809  381032 cri.go:89] found id: "73b1ea07379255faee299a2f05ba98602b4c9b03bf2b2d42ba0cb18ee1d4811e"
	I1101 10:43:12.464814  381032 cri.go:89] found id: "aaae0b0748585fd0a6c527a566037052ad872f863608497afa529bfd03c0c2e9"
	I1101 10:43:12.464817  381032 cri.go:89] found id: ""
	I1101 10:43:12.464859  381032 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:43:12.479816  381032 out.go:203] 
	W1101 10:43:12.481058  381032 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:43:12.481089  381032 out.go:285] * 
	* 
	W1101 10:43:12.486859  381032 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:43:12.488129  381032 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-071527 --alsologtostderr -v=1 failed: exit status 80
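Editor's note (not part of the test output): the failing sequence above can be re-run by hand as a quick check. This is a minimal sketch only, assuming the embed-certs-071527 container is still running; it mirrors the three probes visible in the pause log (kubelet state, crictl listing, runc listing) using `minikube ssh -- <cmd>` to execute them on the node.

# kubelet state check (reported "running: true" on the first pass above)
out/minikube-linux-amd64 ssh -p embed-certs-071527 -- sudo systemctl is-active kubelet
# CRI-O container listing for kube-system (this succeeded in the log)
out/minikube-linux-amd64 ssh -p embed-certs-071527 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
# the call that fails above with "open /run/runc: no such file or directory"
out/minikube-linux-amd64 ssh -p embed-certs-071527 -- sudo runc list -f json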
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-071527
helpers_test.go:243: (dbg) docker inspect embed-certs-071527:

-- stdout --
	[
	    {
	        "Id": "e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347",
	        "Created": "2025-11-01T10:41:08.275582129Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 368694,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:42:12.701206434Z",
	            "FinishedAt": "2025-11-01T10:42:11.819720131Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347/hostname",
	        "HostsPath": "/var/lib/docker/containers/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347/hosts",
	        "LogPath": "/var/lib/docker/containers/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347-json.log",
	        "Name": "/embed-certs-071527",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-071527:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-071527",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347",
	                "LowerDir": "/var/lib/docker/overlay2/146b06fdda976081efe039707e775d2e04bce53111fa9d4362cbe09e9c2d71d1-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/146b06fdda976081efe039707e775d2e04bce53111fa9d4362cbe09e9c2d71d1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/146b06fdda976081efe039707e775d2e04bce53111fa9d4362cbe09e9c2d71d1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/146b06fdda976081efe039707e775d2e04bce53111fa9d4362cbe09e9c2d71d1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-071527",
	                "Source": "/var/lib/docker/volumes/embed-certs-071527/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-071527",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-071527",
	                "name.minikube.sigs.k8s.io": "embed-certs-071527",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e155bc5eb1537ff3aee668a6cd15b3827ac6986094dc652d02939b71f31e098e",
	            "SandboxKey": "/var/run/docker/netns/e155bc5eb153",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-071527": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:72:e0:7e:b2:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71c4a921c7722fad5b37063fce8060e68553067a07b69ccdd6ced39559bcf13c",
	                    "EndpointID": "ac777f41642b756d73c0db451ff29dcdccfcc56acc1157394c6417ad06c3357f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-071527",
	                        "e344e6e53c87"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
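Editor's note (illustrative only, not part of the test): the pause helper resolves its SSH endpoint from this inspect output; the same port can be read back directly with the format string that appears in the pause log, and should print 33118 as long as the container has not been recreated.

docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-071527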
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-071527 -n embed-certs-071527
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-071527 -n embed-certs-071527: exit status 2 (338.482078ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-071527 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-071527 logs -n 25: (1.069411471s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p no-preload-753486 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-071527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p embed-certs-071527 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p no-preload-753486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p no-preload-753486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-071527 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ image   │ no-preload-753486 image list --format=json                                                                                                                                                                                                    │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p no-preload-753486 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ old-k8s-version-707467 image list --format=json                                                                                                                                                                                               │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p old-k8s-version-707467 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-433711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-433711 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-336923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-433711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ image   │ embed-certs-071527 image list --format=json                                                                                                                                                                                                   │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ pause   │ -p embed-certs-071527 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ stop    │ -p newest-cni-336923 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:43:09
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:43:09.433764  380170 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:43:09.434067  380170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:09.434078  380170 out.go:374] Setting ErrFile to fd 2...
	I1101 10:43:09.434083  380170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:09.434282  380170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:43:09.434759  380170 out.go:368] Setting JSON to false
	I1101 10:43:09.435884  380170 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8729,"bootTime":1761985060,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:43:09.435980  380170 start.go:143] virtualization: kvm guest
	I1101 10:43:09.437888  380170 out.go:179] * [default-k8s-diff-port-433711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:43:09.438945  380170 notify.go:221] Checking for updates...
	I1101 10:43:09.438964  380170 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:43:09.440132  380170 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:43:09.441564  380170 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:09.442699  380170 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:43:09.443774  380170 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:43:09.444780  380170 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:43:09.449351  380170 config.go:182] Loaded profile config "default-k8s-diff-port-433711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:09.449826  380170 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:43:09.475735  380170 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:43:09.475843  380170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:09.543642  380170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-01 10:43:09.533166887 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:09.543789  380170 docker.go:319] overlay module found
	I1101 10:43:09.546342  380170 out.go:179] * Using the docker driver based on existing profile
	I1101 10:43:09.547437  380170 start.go:309] selected driver: docker
	I1101 10:43:09.547455  380170 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-433711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-433711 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:09.547565  380170 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:43:09.548143  380170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:09.609326  380170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-01 10:43:09.599016767 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:09.609629  380170 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:43:09.609661  380170 cni.go:84] Creating CNI manager for ""
	I1101 10:43:09.609718  380170 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:43:09.609756  380170 start.go:353] cluster config:
	{Name:default-k8s-diff-port-433711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-433711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:09.611514  380170 out.go:179] * Starting "default-k8s-diff-port-433711" primary control-plane node in "default-k8s-diff-port-433711" cluster
	I1101 10:43:09.612576  380170 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:43:09.613707  380170 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:43:09.616372  380170 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:43:09.616422  380170 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:43:09.616435  380170 cache.go:59] Caching tarball of preloaded images
	I1101 10:43:09.616549  380170 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:43:09.616562  380170 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:43:09.616575  380170 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:43:09.616678  380170 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/config.json ...
	I1101 10:43:09.639184  380170 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:43:09.639203  380170 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:43:09.639225  380170 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:43:09.639259  380170 start.go:360] acquireMachinesLock for default-k8s-diff-port-433711: {Name:mkc4e931cb1d8b02006962c9c78cd1a237482980 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:43:09.639318  380170 start.go:364] duration metric: took 39.32µs to acquireMachinesLock for "default-k8s-diff-port-433711"
	I1101 10:43:09.639340  380170 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:43:09.639349  380170 fix.go:54] fixHost starting: 
	I1101 10:43:09.639653  380170 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433711 --format={{.State.Status}}
	I1101 10:43:09.658086  380170 fix.go:112] recreateIfNeeded on default-k8s-diff-port-433711: state=Stopped err=<nil>
	W1101 10:43:09.658119  380170 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 01 10:42:43 embed-certs-071527 crio[568]: time="2025-11-01T10:42:43.560037177Z" level=info msg="Started container" PID=1775 containerID=af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555/dashboard-metrics-scraper id=8326ba20-f74a-46e9-80b6-25cc7ec19efe name=/runtime.v1.RuntimeService/StartContainer sandboxID=a396426dae7710632e47498ddfde99d05cdf0ed0f61c5c6b45aa22a5efddb1e0
	Nov 01 10:42:44 embed-certs-071527 crio[568]: time="2025-11-01T10:42:44.342931113Z" level=info msg="Removing container: ebc6c8c24337b000be493626f551b034cb138fc3dd059db6f9cc8668e81b55d4" id=a4938873-9839-4498-b745-0adcd71cb5e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:42:44 embed-certs-071527 crio[568]: time="2025-11-01T10:42:44.355620477Z" level=info msg="Removed container ebc6c8c24337b000be493626f551b034cb138fc3dd059db6f9cc8668e81b55d4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555/dashboard-metrics-scraper" id=a4938873-9839-4498-b745-0adcd71cb5e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.368871703Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ec3f4e22-0999-4d6e-afdf-618a71f58a3f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.369796553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8e20c3e7-213f-4d10-a0dc-806cef1b7ff0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.370966783Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a422a3eb-637a-4cb1-b606-c93ff878d4dd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.371111191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.375364708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.375518673Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4adca50d6dd2810e6f7ab86f6a6efe8fd16d97e034b3188ad95830ab4267efff/merged/etc/passwd: no such file or directory"
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.375547519Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4adca50d6dd2810e6f7ab86f6a6efe8fd16d97e034b3188ad95830ab4267efff/merged/etc/group: no such file or directory"
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.375740808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.416853809Z" level=info msg="Created container e59b0c23f0acb4271150df2ed931effa3a18da97816d04337f00cb9f9f51c2a8: kube-system/storage-provisioner/storage-provisioner" id=a422a3eb-637a-4cb1-b606-c93ff878d4dd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.417484198Z" level=info msg="Starting container: e59b0c23f0acb4271150df2ed931effa3a18da97816d04337f00cb9f9f51c2a8" id=47af4e46-7f36-4d66-b4c0-e65876ac74ca name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.419553372Z" level=info msg="Started container" PID=1789 containerID=e59b0c23f0acb4271150df2ed931effa3a18da97816d04337f00cb9f9f51c2a8 description=kube-system/storage-provisioner/storage-provisioner id=47af4e46-7f36-4d66-b4c0-e65876ac74ca name=/runtime.v1.RuntimeService/StartContainer sandboxID=05ef40fe00e380aeb473025596aa4451ff70fb100bd51ccf167e89c13e4cc953
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.236390173Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2f1fa8b4-68e2-4e49-9369-afe4ee655d74 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.237683719Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=589c820e-2484-4b24-bb85-911b56488195 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.239004173Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555/dashboard-metrics-scraper" id=6dd8ed1e-8679-44d8-9887-85bd8caf14f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.239156903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.247981352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.2489248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.296244226Z" level=info msg="Created container 73b1ea07379255faee299a2f05ba98602b4c9b03bf2b2d42ba0cb18ee1d4811e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555/dashboard-metrics-scraper" id=6dd8ed1e-8679-44d8-9887-85bd8caf14f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.297798622Z" level=info msg="Starting container: 73b1ea07379255faee299a2f05ba98602b4c9b03bf2b2d42ba0cb18ee1d4811e" id=9ed15c19-3366-4c35-83b7-676c86f7c291 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.300552838Z" level=info msg="Started container" PID=1825 containerID=73b1ea07379255faee299a2f05ba98602b4c9b03bf2b2d42ba0cb18ee1d4811e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555/dashboard-metrics-scraper id=9ed15c19-3366-4c35-83b7-676c86f7c291 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a396426dae7710632e47498ddfde99d05cdf0ed0f61c5c6b45aa22a5efddb1e0
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.410282157Z" level=info msg="Removing container: af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a" id=f0d05c87-c665-4d84-8dbe-8f64e3fc0b10 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.42293737Z" level=info msg="Removed container af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555/dashboard-metrics-scraper" id=f0d05c87-c665-4d84-8dbe-8f64e3fc0b10 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	73b1ea0737925       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   a396426dae771       dashboard-metrics-scraper-6ffb444bf9-6d555   kubernetes-dashboard
	e59b0c23f0acb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   05ef40fe00e38       storage-provisioner                          kube-system
	aaae0b0748585       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   3afaa92f37320       kubernetes-dashboard-855c9754f9-z9755        kubernetes-dashboard
	40c51be008ed6       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   eb0e5ebea4531       busybox                                      default
	109df5c1f202c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   444eba0d10992       coredns-66bc5c9577-c5td8                     kube-system
	f3ba67113e631       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   8383740a8390e       kindnet-m4vzv                                kube-system
	58af59e91290f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   05ef40fe00e38       storage-provisioner                          kube-system
	80f31512805f1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   aac5a33f6f061       kube-proxy-l5pzc                             kube-system
	e95c5bdefe5ba       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   ff5fb35eccc18       kube-controller-manager-embed-certs-071527   kube-system
	1e1f2165fff91       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   ef18c88b2bded       kube-apiserver-embed-certs-071527            kube-system
	cdeac8cd5ed20       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   04e51bdad40ca       etcd-embed-certs-071527                      kube-system
	2c76e616b169e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   7bb96d12b5be9       kube-scheduler-embed-certs-071527            kube-system
	
	
	==> coredns [109df5c1f202cc41b4dab35bbab63109f56af8ab4f5956ca5e594899e58d5315] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44026 - 13106 "HINFO IN 2774434442936345585.5388974302648616388. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03251537s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-071527
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-071527
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=embed-certs-071527
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_41_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:41:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-071527
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:43:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:42:52 +0000   Sat, 01 Nov 2025 10:41:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:42:52 +0000   Sat, 01 Nov 2025 10:41:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:42:52 +0000   Sat, 01 Nov 2025 10:41:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:42:52 +0000   Sat, 01 Nov 2025 10:41:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-071527
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                0f044f7b-0834-4e21-aea6-e7dd72693606
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-c5td8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-071527                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-m4vzv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-071527             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-071527    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-l5pzc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-071527             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-6d555    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z9755         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node embed-certs-071527 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node embed-certs-071527 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)  kubelet          Node embed-certs-071527 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node embed-certs-071527 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node embed-certs-071527 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node embed-certs-071527 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           106s                 node-controller  Node embed-certs-071527 event: Registered Node embed-certs-071527 in Controller
	  Normal  NodeReady                94s                  kubelet          Node embed-certs-071527 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node embed-certs-071527 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node embed-certs-071527 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node embed-certs-071527 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node embed-certs-071527 event: Registered Node embed-certs-071527 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	[Nov 1 10:42] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000013] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [cdeac8cd5ed20ed69f2cae85240af0e1ad8eda39a544a107fdc467d0259e681f] <==
	{"level":"warn","ts":"2025-11-01T10:42:20.999068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.009178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.019461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.028380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.041064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.049737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.058928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.067792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.076840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.085919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.094678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.102806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.110560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.119003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.127103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.135521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.144238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.152192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.162706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.170728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.178844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.191710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.196185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.216058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.272198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38184","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:43:13 up  2:25,  0 user,  load average: 4.36, 3.92, 2.58
	Linux embed-certs-071527 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3ba67113e6314dcb9c0efc27f90c60d26c7f48d641360b30290880a8ed70d00] <==
	I1101 10:42:22.836424       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:42:22.838441       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 10:42:22.838648       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:42:22.838669       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:42:22.838694       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:42:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:42:23.043460       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:42:23.136291       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:42:23.136406       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:42:23.136640       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:42:23.537317       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:42:23.537435       1 metrics.go:72] Registering metrics
	I1101 10:42:23.537567       1 controller.go:711] "Syncing nftables rules"
	I1101 10:42:33.043341       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:42:33.043386       1 main.go:301] handling current node
	I1101 10:42:43.048594       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:42:43.048631       1 main.go:301] handling current node
	I1101 10:42:53.043725       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:42:53.043760       1 main.go:301] handling current node
	I1101 10:43:03.043605       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:43:03.043666       1 main.go:301] handling current node
	I1101 10:43:13.045982       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:43:13.046020       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1e1f2165fff912b94ead346d574a39dc51a0e07c82ecfc46cf2218274dc3846b] <==
	I1101 10:42:21.880209       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:42:21.880215       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:42:21.880222       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:42:21.880236       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:42:21.880304       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:42:21.880189       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:42:21.882288       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:42:21.882343       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1101 10:42:21.889118       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:42:21.896782       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:42:21.916463       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:42:21.929721       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:42:21.929754       1 policy_source.go:240] refreshing policies
	I1101 10:42:21.930395       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:42:22.239230       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:42:22.263326       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:42:22.295526       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:42:22.312608       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:42:22.320184       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:42:22.360027       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.102.68"}
	I1101 10:42:22.368932       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.59.90"}
	I1101 10:42:22.784208       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:42:25.238693       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:42:25.787455       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:42:25.836449       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e95c5bdefe5bab954d844595226fa1bc71903693fcc281f98c8ca4acd6ebaf44] <==
	I1101 10:42:25.225578       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:42:25.227758       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:42:25.233163       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:42:25.234297       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:42:25.234312       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:42:25.234342       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:42:25.234415       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:42:25.234567       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:42:25.234673       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-071527"
	I1101 10:42:25.234721       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:42:25.234716       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:42:25.235192       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:42:25.236396       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:42:25.236507       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:42:25.236584       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:42:25.237856       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:42:25.237883       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:42:25.240480       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:42:25.240596       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:42:25.241893       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:42:25.244943       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:42:25.247239       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:42:25.248840       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:42:25.251305       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:42:25.259850       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [80f31512805f16f08fa9705b27f3c1498892c2e94ef78e2ad2265ed098cdc17c] <==
	I1101 10:42:22.667793       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:42:22.741122       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:42:22.842181       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:42:22.842639       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1101 10:42:22.843020       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:42:22.876466       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:42:22.876584       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:42:22.883938       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:42:22.887282       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:42:22.887511       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:42:22.889968       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:42:22.890033       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:42:22.890055       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:42:22.890065       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:42:22.890098       1 config.go:309] "Starting node config controller"
	I1101 10:42:22.890103       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:42:22.890109       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:42:22.890491       1 config.go:200] "Starting service config controller"
	I1101 10:42:22.890524       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:42:22.990546       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:42:22.990606       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:42:22.991097       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2c76e616b169eed9eccc0cbbe049577478d27b125b73db1838da83e15bac755d] <==
	I1101 10:42:20.775433       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:42:21.840629       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:42:21.840693       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:42:21.840708       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:42:21.840717       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:42:21.891308       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:42:21.891361       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:42:21.895158       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:21.895203       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:21.895336       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:42:21.895412       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:42:21.995552       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:42:25 embed-certs-071527 kubelet[731]: I1101 10:42:25.757153     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7e2b6485-a045-4bb5-b6b6-13a061e8e2c2-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-6d555\" (UID: \"7e2b6485-a045-4bb5-b6b6-13a061e8e2c2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555"
	Nov 01 10:42:25 embed-certs-071527 kubelet[731]: I1101 10:42:25.757183     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6g6p\" (UniqueName: \"kubernetes.io/projected/7e2b6485-a045-4bb5-b6b6-13a061e8e2c2-kube-api-access-s6g6p\") pod \"dashboard-metrics-scraper-6ffb444bf9-6d555\" (UID: \"7e2b6485-a045-4bb5-b6b6-13a061e8e2c2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555"
	Nov 01 10:42:26 embed-certs-071527 kubelet[731]: I1101 10:42:26.929254     731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:42:33 embed-certs-071527 kubelet[731]: I1101 10:42:33.307144     731 scope.go:117] "RemoveContainer" containerID="60060b96ad3ecffc4b0aa1f0881f4d0d875b6c825f8d99971c3ab52b042670c5"
	Nov 01 10:42:33 embed-certs-071527 kubelet[731]: I1101 10:42:33.317482     731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z9755" podStartSLOduration=4.470201082 podStartE2EDuration="8.317460373s" podCreationTimestamp="2025-11-01 10:42:25 +0000 UTC" firstStartedPulling="2025-11-01 10:42:26.036941451 +0000 UTC m=+6.911936492" lastFinishedPulling="2025-11-01 10:42:29.88420075 +0000 UTC m=+10.759195783" observedRunningTime="2025-11-01 10:42:30.310735756 +0000 UTC m=+11.185730800" watchObservedRunningTime="2025-11-01 10:42:33.317460373 +0000 UTC m=+14.192455422"
	Nov 01 10:42:34 embed-certs-071527 kubelet[731]: I1101 10:42:34.311566     731 scope.go:117] "RemoveContainer" containerID="60060b96ad3ecffc4b0aa1f0881f4d0d875b6c825f8d99971c3ab52b042670c5"
	Nov 01 10:42:34 embed-certs-071527 kubelet[731]: I1101 10:42:34.311674     731 scope.go:117] "RemoveContainer" containerID="ebc6c8c24337b000be493626f551b034cb138fc3dd059db6f9cc8668e81b55d4"
	Nov 01 10:42:34 embed-certs-071527 kubelet[731]: E1101 10:42:34.311840     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6d555_kubernetes-dashboard(7e2b6485-a045-4bb5-b6b6-13a061e8e2c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555" podUID="7e2b6485-a045-4bb5-b6b6-13a061e8e2c2"
	Nov 01 10:42:35 embed-certs-071527 kubelet[731]: I1101 10:42:35.315475     731 scope.go:117] "RemoveContainer" containerID="ebc6c8c24337b000be493626f551b034cb138fc3dd059db6f9cc8668e81b55d4"
	Nov 01 10:42:35 embed-certs-071527 kubelet[731]: E1101 10:42:35.315664     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6d555_kubernetes-dashboard(7e2b6485-a045-4bb5-b6b6-13a061e8e2c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555" podUID="7e2b6485-a045-4bb5-b6b6-13a061e8e2c2"
	Nov 01 10:42:43 embed-certs-071527 kubelet[731]: I1101 10:42:43.482669     731 scope.go:117] "RemoveContainer" containerID="ebc6c8c24337b000be493626f551b034cb138fc3dd059db6f9cc8668e81b55d4"
	Nov 01 10:42:44 embed-certs-071527 kubelet[731]: I1101 10:42:44.341641     731 scope.go:117] "RemoveContainer" containerID="ebc6c8c24337b000be493626f551b034cb138fc3dd059db6f9cc8668e81b55d4"
	Nov 01 10:42:44 embed-certs-071527 kubelet[731]: I1101 10:42:44.341873     731 scope.go:117] "RemoveContainer" containerID="af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a"
	Nov 01 10:42:44 embed-certs-071527 kubelet[731]: E1101 10:42:44.342078     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6d555_kubernetes-dashboard(7e2b6485-a045-4bb5-b6b6-13a061e8e2c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555" podUID="7e2b6485-a045-4bb5-b6b6-13a061e8e2c2"
	Nov 01 10:42:53 embed-certs-071527 kubelet[731]: I1101 10:42:53.368425     731 scope.go:117] "RemoveContainer" containerID="58af59e91290f11d91c4b295b1747d2701441b9dd29c32b69b4232b42c088e25"
	Nov 01 10:42:53 embed-certs-071527 kubelet[731]: I1101 10:42:53.483384     731 scope.go:117] "RemoveContainer" containerID="af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a"
	Nov 01 10:42:53 embed-certs-071527 kubelet[731]: E1101 10:42:53.483627     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6d555_kubernetes-dashboard(7e2b6485-a045-4bb5-b6b6-13a061e8e2c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555" podUID="7e2b6485-a045-4bb5-b6b6-13a061e8e2c2"
	Nov 01 10:43:08 embed-certs-071527 kubelet[731]: I1101 10:43:08.235233     731 scope.go:117] "RemoveContainer" containerID="af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a"
	Nov 01 10:43:08 embed-certs-071527 kubelet[731]: I1101 10:43:08.408168     731 scope.go:117] "RemoveContainer" containerID="af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a"
	Nov 01 10:43:08 embed-certs-071527 kubelet[731]: I1101 10:43:08.408427     731 scope.go:117] "RemoveContainer" containerID="73b1ea07379255faee299a2f05ba98602b4c9b03bf2b2d42ba0cb18ee1d4811e"
	Nov 01 10:43:08 embed-certs-071527 kubelet[731]: E1101 10:43:08.408679     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6d555_kubernetes-dashboard(7e2b6485-a045-4bb5-b6b6-13a061e8e2c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555" podUID="7e2b6485-a045-4bb5-b6b6-13a061e8e2c2"
	Nov 01 10:43:10 embed-certs-071527 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:43:10 embed-certs-071527 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:43:10 embed-certs-071527 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:43:10 embed-certs-071527 systemd[1]: kubelet.service: Consumed 1.720s CPU time.
	
	
	==> kubernetes-dashboard [aaae0b0748585fd0a6c527a566037052ad872f863608497afa529bfd03c0c2e9] <==
	2025/11/01 10:42:29 Using namespace: kubernetes-dashboard
	2025/11/01 10:42:29 Using in-cluster config to connect to apiserver
	2025/11/01 10:42:29 Using secret token for csrf signing
	2025/11/01 10:42:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:42:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:42:29 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:42:29 Generating JWE encryption key
	2025/11/01 10:42:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:42:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:42:30 Initializing JWE encryption key from synchronized object
	2025/11/01 10:42:30 Creating in-cluster Sidecar client
	2025/11/01 10:42:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:42:30 Serving insecurely on HTTP port: 9090
	2025/11/01 10:43:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:42:29 Starting overwatch
	
	
	==> storage-provisioner [58af59e91290f11d91c4b295b1747d2701441b9dd29c32b69b4232b42c088e25] <==
	I1101 10:42:22.605128       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:42:52.607856       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e59b0c23f0acb4271150df2ed931effa3a18da97816d04337f00cb9f9f51c2a8] <==
	I1101 10:42:53.434153       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:42:53.442181       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:42:53.442229       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:42:53.444055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:56.898475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:01.159953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:04.757962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:07.811723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:10.833680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:10.837969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:43:10.838184       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:43:10.838270       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69b6d152-a957-4062-98ba-dd505cbb377c", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-071527_a302cde5-9b60-4bf7-a701-b911a4990481 became leader
	I1101 10:43:10.838352       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-071527_a302cde5-9b60-4bf7-a701-b911a4990481!
	W1101 10:43:10.840124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:10.843828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:43:10.938616       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-071527_a302cde5-9b60-4bf7-a701-b911a4990481!
	W1101 10:43:12.846754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:12.851659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-071527 -n embed-certs-071527
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-071527 -n embed-certs-071527: exit status 2 (334.792197ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-071527 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-071527
helpers_test.go:243: (dbg) docker inspect embed-certs-071527:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347",
	        "Created": "2025-11-01T10:41:08.275582129Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 368694,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:42:12.701206434Z",
	            "FinishedAt": "2025-11-01T10:42:11.819720131Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347/hostname",
	        "HostsPath": "/var/lib/docker/containers/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347/hosts",
	        "LogPath": "/var/lib/docker/containers/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347/e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347-json.log",
	        "Name": "/embed-certs-071527",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-071527:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-071527",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e344e6e53c87412a6329ab29e54642830755c2871d47a7dc87a3166eff912347",
	                "LowerDir": "/var/lib/docker/overlay2/146b06fdda976081efe039707e775d2e04bce53111fa9d4362cbe09e9c2d71d1-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/146b06fdda976081efe039707e775d2e04bce53111fa9d4362cbe09e9c2d71d1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/146b06fdda976081efe039707e775d2e04bce53111fa9d4362cbe09e9c2d71d1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/146b06fdda976081efe039707e775d2e04bce53111fa9d4362cbe09e9c2d71d1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-071527",
	                "Source": "/var/lib/docker/volumes/embed-certs-071527/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-071527",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-071527",
	                "name.minikube.sigs.k8s.io": "embed-certs-071527",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e155bc5eb1537ff3aee668a6cd15b3827ac6986094dc652d02939b71f31e098e",
	            "SandboxKey": "/var/run/docker/netns/e155bc5eb153",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-071527": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:72:e0:7e:b2:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "71c4a921c7722fad5b37063fce8060e68553067a07b69ccdd6ced39559bcf13c",
	                    "EndpointID": "ac777f41642b756d73c0db451ff29dcdccfcc56acc1157394c6417ad06c3357f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-071527",
	                        "e344e6e53c87"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
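For a quick spot check, a single field can be pulled from the same inspect data with a Go template instead of reading the full dump; this mirrors the cli_runner template that appears later in these logs, with the port value taken from the NetworkSettings block above:
	# minimal sketch: query just the host port bound to the container's SSH port
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-071527
	# per the dump above this prints 33118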
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-071527 -n embed-certs-071527
E1101 10:43:14.414625   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-071527 -n embed-certs-071527: exit status 2 (332.211876ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-071527 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-071527 logs -n 25: (1.10104035s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p no-preload-753486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p no-preload-753486 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ addons  │ enable metrics-server -p embed-certs-071527 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │                     │
	│ stop    │ -p embed-certs-071527 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p no-preload-753486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:41 UTC │
	│ start   │ -p no-preload-753486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:41 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable dashboard -p embed-certs-071527 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ image   │ no-preload-753486 image list --format=json                                                                                                                                                                                                    │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p no-preload-753486 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ old-k8s-version-707467 image list --format=json                                                                                                                                                                                               │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p old-k8s-version-707467 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-433711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-433711 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-336923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-433711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ image   │ embed-certs-071527 image list --format=json                                                                                                                                                                                                   │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ pause   │ -p embed-certs-071527 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ stop    │ -p newest-cni-336923 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:43:09
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:43:09.433764  380170 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:43:09.434067  380170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:09.434078  380170 out.go:374] Setting ErrFile to fd 2...
	I1101 10:43:09.434083  380170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:09.434282  380170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:43:09.434759  380170 out.go:368] Setting JSON to false
	I1101 10:43:09.435884  380170 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8729,"bootTime":1761985060,"procs":274,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:43:09.435980  380170 start.go:143] virtualization: kvm guest
	I1101 10:43:09.437888  380170 out.go:179] * [default-k8s-diff-port-433711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:43:09.438945  380170 notify.go:221] Checking for updates...
	I1101 10:43:09.438964  380170 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:43:09.440132  380170 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:43:09.441564  380170 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:09.442699  380170 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:43:09.443774  380170 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:43:09.444780  380170 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:43:09.449351  380170 config.go:182] Loaded profile config "default-k8s-diff-port-433711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:09.449826  380170 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:43:09.475735  380170 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:43:09.475843  380170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:09.543642  380170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-01 10:43:09.533166887 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:09.543789  380170 docker.go:319] overlay module found
	I1101 10:43:09.546342  380170 out.go:179] * Using the docker driver based on existing profile
	I1101 10:43:09.547437  380170 start.go:309] selected driver: docker
	I1101 10:43:09.547455  380170 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-433711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-433711 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:09.547565  380170 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:43:09.548143  380170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:09.609326  380170 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-01 10:43:09.599016767 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:09.609629  380170 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:43:09.609661  380170 cni.go:84] Creating CNI manager for ""
	I1101 10:43:09.609718  380170 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:43:09.609756  380170 start.go:353] cluster config:
	{Name:default-k8s-diff-port-433711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-433711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:09.611514  380170 out.go:179] * Starting "default-k8s-diff-port-433711" primary control-plane node in "default-k8s-diff-port-433711" cluster
	I1101 10:43:09.612576  380170 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:43:09.613707  380170 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:43:09.616372  380170 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:43:09.616422  380170 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:43:09.616435  380170 cache.go:59] Caching tarball of preloaded images
	I1101 10:43:09.616549  380170 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:43:09.616562  380170 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:43:09.616575  380170 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:43:09.616678  380170 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/config.json ...
	I1101 10:43:09.639184  380170 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:43:09.639203  380170 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:43:09.639225  380170 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:43:09.639259  380170 start.go:360] acquireMachinesLock for default-k8s-diff-port-433711: {Name:mkc4e931cb1d8b02006962c9c78cd1a237482980 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:43:09.639318  380170 start.go:364] duration metric: took 39.32µs to acquireMachinesLock for "default-k8s-diff-port-433711"
	I1101 10:43:09.639340  380170 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:43:09.639349  380170 fix.go:54] fixHost starting: 
	I1101 10:43:09.639653  380170 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433711 --format={{.State.Status}}
	I1101 10:43:09.658086  380170 fix.go:112] recreateIfNeeded on default-k8s-diff-port-433711: state=Stopped err=<nil>
	W1101 10:43:09.658119  380170 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:43:09.659401  380170 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-433711" ...
	I1101 10:43:09.659472  380170 cli_runner.go:164] Run: docker start default-k8s-diff-port-433711
	I1101 10:43:09.923176  380170 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433711 --format={{.State.Status}}
	I1101 10:43:09.947797  380170 kic.go:430] container "default-k8s-diff-port-433711" state is running.
	I1101 10:43:09.948310  380170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-433711
	I1101 10:43:09.972253  380170 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/default-k8s-diff-port-433711/config.json ...
	I1101 10:43:09.972486  380170 machine.go:94] provisionDockerMachine start ...
	I1101 10:43:09.972579  380170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:43:09.990900  380170 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:09.991205  380170 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 10:43:09.991222  380170 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:43:09.992007  380170 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40464->127.0.0.1:33128: read: connection reset by peer
	I1101 10:43:13.137994  380170 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-433711
	
	I1101 10:43:13.138025  380170 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-433711"
	I1101 10:43:13.138079  380170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:43:13.157636  380170 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:13.157905  380170 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 10:43:13.157933  380170 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-433711 && echo "default-k8s-diff-port-433711" | sudo tee /etc/hostname
	I1101 10:43:13.315165  380170 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-433711
	
	I1101 10:43:13.315263  380170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:43:13.335358  380170 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:13.335651  380170 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 10:43:13.335677  380170 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-433711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-433711/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-433711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:43:13.480678  380170 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:43:13.480711  380170 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:43:13.480763  380170 ubuntu.go:190] setting up certificates
	I1101 10:43:13.480782  380170 provision.go:84] configureAuth start
	I1101 10:43:13.480852  380170 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-433711
	I1101 10:43:13.499375  380170 provision.go:143] copyHostCerts
	I1101 10:43:13.499440  380170 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:43:13.499453  380170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:43:13.499553  380170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:43:13.499677  380170 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:43:13.499691  380170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:43:13.499733  380170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:43:13.499829  380170 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:43:13.499840  380170 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:43:13.499873  380170 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:43:13.499977  380170 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-433711 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-433711 localhost minikube]
	I1101 10:43:14.124634  380170 provision.go:177] copyRemoteCerts
	I1101 10:43:14.124701  380170 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:43:14.124741  380170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:43:14.142340  380170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/default-k8s-diff-port-433711/id_rsa Username:docker}
	I1101 10:43:14.244694  380170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:43:14.263671  380170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 10:43:14.282569  380170 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:43:14.300236  380170 provision.go:87] duration metric: took 819.438925ms to configureAuth
	I1101 10:43:14.300264  380170 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:43:14.300435  380170 config.go:182] Loaded profile config "default-k8s-diff-port-433711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:14.300551  380170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:43:14.320448  380170 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:14.320740  380170 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 10:43:14.320765  380170 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Nov 01 10:42:43 embed-certs-071527 crio[568]: time="2025-11-01T10:42:43.560037177Z" level=info msg="Started container" PID=1775 containerID=af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555/dashboard-metrics-scraper id=8326ba20-f74a-46e9-80b6-25cc7ec19efe name=/runtime.v1.RuntimeService/StartContainer sandboxID=a396426dae7710632e47498ddfde99d05cdf0ed0f61c5c6b45aa22a5efddb1e0
	Nov 01 10:42:44 embed-certs-071527 crio[568]: time="2025-11-01T10:42:44.342931113Z" level=info msg="Removing container: ebc6c8c24337b000be493626f551b034cb138fc3dd059db6f9cc8668e81b55d4" id=a4938873-9839-4498-b745-0adcd71cb5e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:42:44 embed-certs-071527 crio[568]: time="2025-11-01T10:42:44.355620477Z" level=info msg="Removed container ebc6c8c24337b000be493626f551b034cb138fc3dd059db6f9cc8668e81b55d4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555/dashboard-metrics-scraper" id=a4938873-9839-4498-b745-0adcd71cb5e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.368871703Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ec3f4e22-0999-4d6e-afdf-618a71f58a3f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.369796553Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8e20c3e7-213f-4d10-a0dc-806cef1b7ff0 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.370966783Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a422a3eb-637a-4cb1-b606-c93ff878d4dd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.371111191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.375364708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.375518673Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4adca50d6dd2810e6f7ab86f6a6efe8fd16d97e034b3188ad95830ab4267efff/merged/etc/passwd: no such file or directory"
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.375547519Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4adca50d6dd2810e6f7ab86f6a6efe8fd16d97e034b3188ad95830ab4267efff/merged/etc/group: no such file or directory"
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.375740808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.416853809Z" level=info msg="Created container e59b0c23f0acb4271150df2ed931effa3a18da97816d04337f00cb9f9f51c2a8: kube-system/storage-provisioner/storage-provisioner" id=a422a3eb-637a-4cb1-b606-c93ff878d4dd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.417484198Z" level=info msg="Starting container: e59b0c23f0acb4271150df2ed931effa3a18da97816d04337f00cb9f9f51c2a8" id=47af4e46-7f36-4d66-b4c0-e65876ac74ca name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:42:53 embed-certs-071527 crio[568]: time="2025-11-01T10:42:53.419553372Z" level=info msg="Started container" PID=1789 containerID=e59b0c23f0acb4271150df2ed931effa3a18da97816d04337f00cb9f9f51c2a8 description=kube-system/storage-provisioner/storage-provisioner id=47af4e46-7f36-4d66-b4c0-e65876ac74ca name=/runtime.v1.RuntimeService/StartContainer sandboxID=05ef40fe00e380aeb473025596aa4451ff70fb100bd51ccf167e89c13e4cc953
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.236390173Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2f1fa8b4-68e2-4e49-9369-afe4ee655d74 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.237683719Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=589c820e-2484-4b24-bb85-911b56488195 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.239004173Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555/dashboard-metrics-scraper" id=6dd8ed1e-8679-44d8-9887-85bd8caf14f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.239156903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.247981352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.2489248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.296244226Z" level=info msg="Created container 73b1ea07379255faee299a2f05ba98602b4c9b03bf2b2d42ba0cb18ee1d4811e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555/dashboard-metrics-scraper" id=6dd8ed1e-8679-44d8-9887-85bd8caf14f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.297798622Z" level=info msg="Starting container: 73b1ea07379255faee299a2f05ba98602b4c9b03bf2b2d42ba0cb18ee1d4811e" id=9ed15c19-3366-4c35-83b7-676c86f7c291 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.300552838Z" level=info msg="Started container" PID=1825 containerID=73b1ea07379255faee299a2f05ba98602b4c9b03bf2b2d42ba0cb18ee1d4811e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555/dashboard-metrics-scraper id=9ed15c19-3366-4c35-83b7-676c86f7c291 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a396426dae7710632e47498ddfde99d05cdf0ed0f61c5c6b45aa22a5efddb1e0
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.410282157Z" level=info msg="Removing container: af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a" id=f0d05c87-c665-4d84-8dbe-8f64e3fc0b10 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:43:08 embed-certs-071527 crio[568]: time="2025-11-01T10:43:08.42293737Z" level=info msg="Removed container af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555/dashboard-metrics-scraper" id=f0d05c87-c665-4d84-8dbe-8f64e3fc0b10 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	73b1ea0737925       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   a396426dae771       dashboard-metrics-scraper-6ffb444bf9-6d555   kubernetes-dashboard
	e59b0c23f0acb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   05ef40fe00e38       storage-provisioner                          kube-system
	aaae0b0748585       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   3afaa92f37320       kubernetes-dashboard-855c9754f9-z9755        kubernetes-dashboard
	40c51be008ed6       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   eb0e5ebea4531       busybox                                      default
	109df5c1f202c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   444eba0d10992       coredns-66bc5c9577-c5td8                     kube-system
	f3ba67113e631       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   8383740a8390e       kindnet-m4vzv                                kube-system
	58af59e91290f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   05ef40fe00e38       storage-provisioner                          kube-system
	80f31512805f1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   aac5a33f6f061       kube-proxy-l5pzc                             kube-system
	e95c5bdefe5ba       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   ff5fb35eccc18       kube-controller-manager-embed-certs-071527   kube-system
	1e1f2165fff91       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   ef18c88b2bded       kube-apiserver-embed-certs-071527            kube-system
	cdeac8cd5ed20       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   04e51bdad40ca       etcd-embed-certs-071527                      kube-system
	2c76e616b169e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   7bb96d12b5be9       kube-scheduler-embed-certs-071527            kube-system
	
	
	==> coredns [109df5c1f202cc41b4dab35bbab63109f56af8ab4f5956ca5e594899e58d5315] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44026 - 13106 "HINFO IN 2774434442936345585.5388974302648616388. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03251537s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-071527
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-071527
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=embed-certs-071527
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_41_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:41:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-071527
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:43:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:42:52 +0000   Sat, 01 Nov 2025 10:41:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:42:52 +0000   Sat, 01 Nov 2025 10:41:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:42:52 +0000   Sat, 01 Nov 2025 10:41:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:42:52 +0000   Sat, 01 Nov 2025 10:41:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-071527
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                0f044f7b-0834-4e21-aea6-e7dd72693606
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-c5td8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-071527                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-m4vzv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-071527             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-071527    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-l5pzc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-071527             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-6d555    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-z9755         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node embed-certs-071527 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node embed-certs-071527 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 117s)  kubelet          Node embed-certs-071527 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node embed-certs-071527 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node embed-certs-071527 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node embed-certs-071527 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           108s                 node-controller  Node embed-certs-071527 event: Registered Node embed-certs-071527 in Controller
	  Normal  NodeReady                96s                  kubelet          Node embed-certs-071527 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node embed-certs-071527 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node embed-certs-071527 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node embed-certs-071527 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node embed-certs-071527 event: Registered Node embed-certs-071527 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	[Nov 1 10:42] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000013] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [cdeac8cd5ed20ed69f2cae85240af0e1ad8eda39a544a107fdc467d0259e681f] <==
	{"level":"warn","ts":"2025-11-01T10:42:20.999068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.009178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.019461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.028380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.041064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.049737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.058928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.067792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.076840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.085919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.094678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.102806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.110560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.119003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.127103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.135521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.144238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.152192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.162706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.170728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.178844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.191710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.196185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.216058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:42:21.272198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38184","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:43:15 up  2:25,  0 user,  load average: 4.36, 3.92, 2.58
	Linux embed-certs-071527 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3ba67113e6314dcb9c0efc27f90c60d26c7f48d641360b30290880a8ed70d00] <==
	I1101 10:42:22.836424       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:42:22.838441       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 10:42:22.838648       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:42:22.838669       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:42:22.838694       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:42:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:42:23.043460       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:42:23.136291       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:42:23.136406       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:42:23.136640       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:42:23.537317       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:42:23.537435       1 metrics.go:72] Registering metrics
	I1101 10:42:23.537567       1 controller.go:711] "Syncing nftables rules"
	I1101 10:42:33.043341       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:42:33.043386       1 main.go:301] handling current node
	I1101 10:42:43.048594       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:42:43.048631       1 main.go:301] handling current node
	I1101 10:42:53.043725       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:42:53.043760       1 main.go:301] handling current node
	I1101 10:43:03.043605       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:43:03.043666       1 main.go:301] handling current node
	I1101 10:43:13.045982       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 10:43:13.046020       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1e1f2165fff912b94ead346d574a39dc51a0e07c82ecfc46cf2218274dc3846b] <==
	I1101 10:42:21.880209       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:42:21.880215       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:42:21.880222       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:42:21.880236       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:42:21.880304       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:42:21.880189       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:42:21.882288       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:42:21.882343       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1101 10:42:21.889118       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:42:21.896782       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:42:21.916463       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:42:21.929721       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:42:21.929754       1 policy_source.go:240] refreshing policies
	I1101 10:42:21.930395       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:42:22.239230       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:42:22.263326       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:42:22.295526       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:42:22.312608       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:42:22.320184       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:42:22.360027       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.102.68"}
	I1101 10:42:22.368932       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.59.90"}
	I1101 10:42:22.784208       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:42:25.238693       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:42:25.787455       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:42:25.836449       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e95c5bdefe5bab954d844595226fa1bc71903693fcc281f98c8ca4acd6ebaf44] <==
	I1101 10:42:25.225578       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:42:25.227758       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:42:25.233163       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:42:25.234297       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:42:25.234312       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:42:25.234342       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:42:25.234415       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:42:25.234567       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:42:25.234673       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-071527"
	I1101 10:42:25.234721       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:42:25.234716       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:42:25.235192       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:42:25.236396       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:42:25.236507       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:42:25.236584       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:42:25.237856       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:42:25.237883       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:42:25.240480       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:42:25.240596       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:42:25.241893       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:42:25.244943       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:42:25.247239       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:42:25.248840       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:42:25.251305       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:42:25.259850       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [80f31512805f16f08fa9705b27f3c1498892c2e94ef78e2ad2265ed098cdc17c] <==
	I1101 10:42:22.667793       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:42:22.741122       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:42:22.842181       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:42:22.842639       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1101 10:42:22.843020       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:42:22.876466       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:42:22.876584       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:42:22.883938       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:42:22.887282       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:42:22.887511       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:42:22.889968       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:42:22.890033       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:42:22.890055       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:42:22.890065       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:42:22.890098       1 config.go:309] "Starting node config controller"
	I1101 10:42:22.890103       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:42:22.890109       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:42:22.890491       1 config.go:200] "Starting service config controller"
	I1101 10:42:22.890524       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:42:22.990546       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:42:22.990606       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:42:22.991097       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2c76e616b169eed9eccc0cbbe049577478d27b125b73db1838da83e15bac755d] <==
	I1101 10:42:20.775433       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:42:21.840629       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:42:21.840693       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:42:21.840708       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:42:21.840717       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:42:21.891308       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:42:21.891361       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:42:21.895158       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:21.895203       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:42:21.895336       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:42:21.895412       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:42:21.995552       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:42:25 embed-certs-071527 kubelet[731]: I1101 10:42:25.757153     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7e2b6485-a045-4bb5-b6b6-13a061e8e2c2-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-6d555\" (UID: \"7e2b6485-a045-4bb5-b6b6-13a061e8e2c2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555"
	Nov 01 10:42:25 embed-certs-071527 kubelet[731]: I1101 10:42:25.757183     731 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6g6p\" (UniqueName: \"kubernetes.io/projected/7e2b6485-a045-4bb5-b6b6-13a061e8e2c2-kube-api-access-s6g6p\") pod \"dashboard-metrics-scraper-6ffb444bf9-6d555\" (UID: \"7e2b6485-a045-4bb5-b6b6-13a061e8e2c2\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555"
	Nov 01 10:42:26 embed-certs-071527 kubelet[731]: I1101 10:42:26.929254     731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:42:33 embed-certs-071527 kubelet[731]: I1101 10:42:33.307144     731 scope.go:117] "RemoveContainer" containerID="60060b96ad3ecffc4b0aa1f0881f4d0d875b6c825f8d99971c3ab52b042670c5"
	Nov 01 10:42:33 embed-certs-071527 kubelet[731]: I1101 10:42:33.317482     731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-z9755" podStartSLOduration=4.470201082 podStartE2EDuration="8.317460373s" podCreationTimestamp="2025-11-01 10:42:25 +0000 UTC" firstStartedPulling="2025-11-01 10:42:26.036941451 +0000 UTC m=+6.911936492" lastFinishedPulling="2025-11-01 10:42:29.88420075 +0000 UTC m=+10.759195783" observedRunningTime="2025-11-01 10:42:30.310735756 +0000 UTC m=+11.185730800" watchObservedRunningTime="2025-11-01 10:42:33.317460373 +0000 UTC m=+14.192455422"
	Nov 01 10:42:34 embed-certs-071527 kubelet[731]: I1101 10:42:34.311566     731 scope.go:117] "RemoveContainer" containerID="60060b96ad3ecffc4b0aa1f0881f4d0d875b6c825f8d99971c3ab52b042670c5"
	Nov 01 10:42:34 embed-certs-071527 kubelet[731]: I1101 10:42:34.311674     731 scope.go:117] "RemoveContainer" containerID="ebc6c8c24337b000be493626f551b034cb138fc3dd059db6f9cc8668e81b55d4"
	Nov 01 10:42:34 embed-certs-071527 kubelet[731]: E1101 10:42:34.311840     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6d555_kubernetes-dashboard(7e2b6485-a045-4bb5-b6b6-13a061e8e2c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555" podUID="7e2b6485-a045-4bb5-b6b6-13a061e8e2c2"
	Nov 01 10:42:35 embed-certs-071527 kubelet[731]: I1101 10:42:35.315475     731 scope.go:117] "RemoveContainer" containerID="ebc6c8c24337b000be493626f551b034cb138fc3dd059db6f9cc8668e81b55d4"
	Nov 01 10:42:35 embed-certs-071527 kubelet[731]: E1101 10:42:35.315664     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6d555_kubernetes-dashboard(7e2b6485-a045-4bb5-b6b6-13a061e8e2c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555" podUID="7e2b6485-a045-4bb5-b6b6-13a061e8e2c2"
	Nov 01 10:42:43 embed-certs-071527 kubelet[731]: I1101 10:42:43.482669     731 scope.go:117] "RemoveContainer" containerID="ebc6c8c24337b000be493626f551b034cb138fc3dd059db6f9cc8668e81b55d4"
	Nov 01 10:42:44 embed-certs-071527 kubelet[731]: I1101 10:42:44.341641     731 scope.go:117] "RemoveContainer" containerID="ebc6c8c24337b000be493626f551b034cb138fc3dd059db6f9cc8668e81b55d4"
	Nov 01 10:42:44 embed-certs-071527 kubelet[731]: I1101 10:42:44.341873     731 scope.go:117] "RemoveContainer" containerID="af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a"
	Nov 01 10:42:44 embed-certs-071527 kubelet[731]: E1101 10:42:44.342078     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6d555_kubernetes-dashboard(7e2b6485-a045-4bb5-b6b6-13a061e8e2c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555" podUID="7e2b6485-a045-4bb5-b6b6-13a061e8e2c2"
	Nov 01 10:42:53 embed-certs-071527 kubelet[731]: I1101 10:42:53.368425     731 scope.go:117] "RemoveContainer" containerID="58af59e91290f11d91c4b295b1747d2701441b9dd29c32b69b4232b42c088e25"
	Nov 01 10:42:53 embed-certs-071527 kubelet[731]: I1101 10:42:53.483384     731 scope.go:117] "RemoveContainer" containerID="af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a"
	Nov 01 10:42:53 embed-certs-071527 kubelet[731]: E1101 10:42:53.483627     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6d555_kubernetes-dashboard(7e2b6485-a045-4bb5-b6b6-13a061e8e2c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555" podUID="7e2b6485-a045-4bb5-b6b6-13a061e8e2c2"
	Nov 01 10:43:08 embed-certs-071527 kubelet[731]: I1101 10:43:08.235233     731 scope.go:117] "RemoveContainer" containerID="af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a"
	Nov 01 10:43:08 embed-certs-071527 kubelet[731]: I1101 10:43:08.408168     731 scope.go:117] "RemoveContainer" containerID="af16e8711c49d542fc8cd1d9a396138787681b76ab6372ded6e5423750a36e4a"
	Nov 01 10:43:08 embed-certs-071527 kubelet[731]: I1101 10:43:08.408427     731 scope.go:117] "RemoveContainer" containerID="73b1ea07379255faee299a2f05ba98602b4c9b03bf2b2d42ba0cb18ee1d4811e"
	Nov 01 10:43:08 embed-certs-071527 kubelet[731]: E1101 10:43:08.408679     731 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-6d555_kubernetes-dashboard(7e2b6485-a045-4bb5-b6b6-13a061e8e2c2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-6d555" podUID="7e2b6485-a045-4bb5-b6b6-13a061e8e2c2"
	Nov 01 10:43:10 embed-certs-071527 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:43:10 embed-certs-071527 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:43:10 embed-certs-071527 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:43:10 embed-certs-071527 systemd[1]: kubelet.service: Consumed 1.720s CPU time.
	
	
	==> kubernetes-dashboard [aaae0b0748585fd0a6c527a566037052ad872f863608497afa529bfd03c0c2e9] <==
	2025/11/01 10:42:29 Using namespace: kubernetes-dashboard
	2025/11/01 10:42:29 Using in-cluster config to connect to apiserver
	2025/11/01 10:42:29 Using secret token for csrf signing
	2025/11/01 10:42:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:42:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:42:29 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:42:29 Generating JWE encryption key
	2025/11/01 10:42:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:42:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:42:30 Initializing JWE encryption key from synchronized object
	2025/11/01 10:42:30 Creating in-cluster Sidecar client
	2025/11/01 10:42:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:42:30 Serving insecurely on HTTP port: 9090
	2025/11/01 10:43:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:42:29 Starting overwatch
	
	
	==> storage-provisioner [58af59e91290f11d91c4b295b1747d2701441b9dd29c32b69b4232b42c088e25] <==
	I1101 10:42:22.605128       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:42:52.607856       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e59b0c23f0acb4271150df2ed931effa3a18da97816d04337f00cb9f9f51c2a8] <==
	I1101 10:42:53.434153       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:42:53.442181       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:42:53.442229       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:42:53.444055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:42:56.898475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:01.159953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:04.757962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:07.811723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:10.833680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:10.837969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:43:10.838184       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:43:10.838270       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"69b6d152-a957-4062-98ba-dd505cbb377c", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-071527_a302cde5-9b60-4bf7-a701-b911a4990481 became leader
	I1101 10:43:10.838352       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-071527_a302cde5-9b60-4bf7-a701-b911a4990481!
	W1101 10:43:10.840124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:10.843828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:43:10.938616       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-071527_a302cde5-9b60-4bf7-a701-b911a4990481!
	W1101 10:43:12.846754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:12.851659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:14.855125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:14.859239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-071527 -n embed-certs-071527
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-071527 -n embed-certs-071527: exit status 2 (337.521777ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-071527 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.95s)
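Note on the failure above: the post-mortem shows the same symptom in two places, CoreDNS and the first storage-provisioner container both time out dialing the kubernetes Service ClusterIP ("dial tcp 10.96.0.1:443: i/o timeout") before the node settles, after which the restarted provisioner acquires its lease normally. A minimal sketch of that reachability check follows; it is not part of minikube or its test suite, and the target address, port, and 5-second timeout are assumptions lifted from the captured errors. It only says anything useful when run from inside the cluster network, for example from a throwaway debug pod.

// clusterip_probe.go: hypothetical probe mirroring the dials that failed in the
// logs above. The target address and timeout are assumptions taken from those
// log lines, not values read from the cluster.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const target = "10.96.0.1:443" // kubernetes Service ClusterIP seen in the CoreDNS errors
	conn, err := net.DialTimeout("tcp", target, 5*time.Second)
	if err != nil {
		// Same failure mode reported by CoreDNS and the first storage-provisioner.
		fmt.Printf("cannot reach %s: %v\n", target, err)
		return
	}
	defer conn.Close()
	fmt.Printf("reached %s\n", target)
}

If a probe like this succeeds from a pod while the components above only logged timeouts during startup, the issue is more likely startup ordering (kube-proxy programming its rules after those pods came up) than a persistent networking fault.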

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-336923 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-336923 --alsologtostderr -v=1: exit status 80 (2.117133883s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-336923 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:43:32.128224  387648 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:43:32.128544  387648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:32.128555  387648 out.go:374] Setting ErrFile to fd 2...
	I1101 10:43:32.128559  387648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:32.128749  387648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:43:32.129015  387648 out.go:368] Setting JSON to false
	I1101 10:43:32.129052  387648 mustload.go:66] Loading cluster: newest-cni-336923
	I1101 10:43:32.129382  387648 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:32.129861  387648 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:32.146962  387648 host.go:66] Checking if "newest-cni-336923" exists ...
	I1101 10:43:32.147201  387648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:32.204904  387648 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-01 10:43:32.193748238 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:32.205540  387648 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-336923 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:43:32.207171  387648 out.go:179] * Pausing node newest-cni-336923 ... 
	I1101 10:43:32.208385  387648 host.go:66] Checking if "newest-cni-336923" exists ...
	I1101 10:43:32.208737  387648 ssh_runner.go:195] Run: systemctl --version
	I1101 10:43:32.208791  387648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:32.225923  387648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:32.324798  387648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:32.336776  387648 pause.go:52] kubelet running: true
	I1101 10:43:32.336842  387648 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:43:32.471326  387648 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:43:32.471413  387648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:43:32.540470  387648 cri.go:89] found id: "8b2f249be57c07c0fa1ccb42aff365050a82c031fceea126b5fcf5b78e77eb6a"
	I1101 10:43:32.540523  387648 cri.go:89] found id: "a6d88d23ae3a6133074c0336e7b2c0423a83ff6593732aa8f95697cdc8d67901"
	I1101 10:43:32.540530  387648 cri.go:89] found id: "05876bdf52963039d74359b1c6e86efb9d3a0b4785c1d0b737d8e32f606c95b6"
	I1101 10:43:32.540535  387648 cri.go:89] found id: "96ea6466b389e2e86c6d49b93414ad564bab4d3aff97667a22ec3b36e4aa6693"
	I1101 10:43:32.540551  387648 cri.go:89] found id: "303e5cbe1c98eba4a68058ea2e32a00cf43f2a9a95f999a8979fc1b9c1e2d5ed"
	I1101 10:43:32.540554  387648 cri.go:89] found id: "b6e317eecde60778bc7ea3d748bfc59c8cc1f778c663d1b00a08818d50a539f2"
	I1101 10:43:32.540557  387648 cri.go:89] found id: ""
	I1101 10:43:32.540596  387648 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:43:32.552632  387648 retry.go:31] will retry after 290.320419ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:32Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:43:32.843120  387648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:32.855736  387648 pause.go:52] kubelet running: false
	I1101 10:43:32.855783  387648 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:43:32.970507  387648 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:43:32.970586  387648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:43:33.035535  387648 cri.go:89] found id: "8b2f249be57c07c0fa1ccb42aff365050a82c031fceea126b5fcf5b78e77eb6a"
	I1101 10:43:33.035556  387648 cri.go:89] found id: "a6d88d23ae3a6133074c0336e7b2c0423a83ff6593732aa8f95697cdc8d67901"
	I1101 10:43:33.035560  387648 cri.go:89] found id: "05876bdf52963039d74359b1c6e86efb9d3a0b4785c1d0b737d8e32f606c95b6"
	I1101 10:43:33.035564  387648 cri.go:89] found id: "96ea6466b389e2e86c6d49b93414ad564bab4d3aff97667a22ec3b36e4aa6693"
	I1101 10:43:33.035569  387648 cri.go:89] found id: "303e5cbe1c98eba4a68058ea2e32a00cf43f2a9a95f999a8979fc1b9c1e2d5ed"
	I1101 10:43:33.035573  387648 cri.go:89] found id: "b6e317eecde60778bc7ea3d748bfc59c8cc1f778c663d1b00a08818d50a539f2"
	I1101 10:43:33.035578  387648 cri.go:89] found id: ""
	I1101 10:43:33.035621  387648 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:43:33.047335  387648 retry.go:31] will retry after 265.687696ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:33Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:43:33.313722  387648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:33.326484  387648 pause.go:52] kubelet running: false
	I1101 10:43:33.326575  387648 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:43:33.445652  387648 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:43:33.445730  387648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:43:33.513012  387648 cri.go:89] found id: "8b2f249be57c07c0fa1ccb42aff365050a82c031fceea126b5fcf5b78e77eb6a"
	I1101 10:43:33.513034  387648 cri.go:89] found id: "a6d88d23ae3a6133074c0336e7b2c0423a83ff6593732aa8f95697cdc8d67901"
	I1101 10:43:33.513038  387648 cri.go:89] found id: "05876bdf52963039d74359b1c6e86efb9d3a0b4785c1d0b737d8e32f606c95b6"
	I1101 10:43:33.513041  387648 cri.go:89] found id: "96ea6466b389e2e86c6d49b93414ad564bab4d3aff97667a22ec3b36e4aa6693"
	I1101 10:43:33.513043  387648 cri.go:89] found id: "303e5cbe1c98eba4a68058ea2e32a00cf43f2a9a95f999a8979fc1b9c1e2d5ed"
	I1101 10:43:33.513047  387648 cri.go:89] found id: "b6e317eecde60778bc7ea3d748bfc59c8cc1f778c663d1b00a08818d50a539f2"
	I1101 10:43:33.513050  387648 cri.go:89] found id: ""
	I1101 10:43:33.513096  387648 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:43:33.524290  387648 retry.go:31] will retry after 442.576189ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:33Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:43:33.968016  387648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:33.985541  387648 pause.go:52] kubelet running: false
	I1101 10:43:33.985617  387648 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:43:34.101116  387648 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:43:34.101199  387648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:43:34.165735  387648 cri.go:89] found id: "8b2f249be57c07c0fa1ccb42aff365050a82c031fceea126b5fcf5b78e77eb6a"
	I1101 10:43:34.165755  387648 cri.go:89] found id: "a6d88d23ae3a6133074c0336e7b2c0423a83ff6593732aa8f95697cdc8d67901"
	I1101 10:43:34.165759  387648 cri.go:89] found id: "05876bdf52963039d74359b1c6e86efb9d3a0b4785c1d0b737d8e32f606c95b6"
	I1101 10:43:34.165762  387648 cri.go:89] found id: "96ea6466b389e2e86c6d49b93414ad564bab4d3aff97667a22ec3b36e4aa6693"
	I1101 10:43:34.165764  387648 cri.go:89] found id: "303e5cbe1c98eba4a68058ea2e32a00cf43f2a9a95f999a8979fc1b9c1e2d5ed"
	I1101 10:43:34.165767  387648 cri.go:89] found id: "b6e317eecde60778bc7ea3d748bfc59c8cc1f778c663d1b00a08818d50a539f2"
	I1101 10:43:34.165770  387648 cri.go:89] found id: ""
	I1101 10:43:34.165808  387648 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:43:34.179378  387648 out.go:203] 
	W1101 10:43:34.180524  387648 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:43:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:43:34.180541  387648 out.go:285] * 
	W1101 10:43:34.185298  387648 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:43:34.186556  387648 out.go:203] 

                                                
                                                
** /stderr **
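The failure above is mechanical: the pause path first enumerates CRI containers with the namespace-labelled crictl calls, then asks runc for the running set, and every "sudo runc list -f json" attempt fails with "open /run/runc: no such file or directory", so after a few retries the command gives up and exits with GUEST_PAUSE. The sketch below mirrors that retry-with-backoff shape as seen in the retry.go lines; it is illustrative only, and the attempt count, backoff range, and helper name are assumptions rather than minikube's actual retry code.

	// Minimal sketch (not minikube source): retry "sudo runc list -f json"
	// with a short backoff, the pattern visible in the log above.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// listRunning is the call that keeps failing above while /run/runc is absent.
	func listRunning() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	}

	func main() {
		for attempt := 1; attempt <= 4; attempt++ {
			out, err := listRunning()
			if err == nil {
				fmt.Printf("running containers: %s\n", out)
				return
			}
			// Backoff in the 250-500ms range, similar to the delays logged above.
			backoff := time.Duration(250+rand.Intn(250)) * time.Millisecond
			fmt.Printf("attempt %d failed (%v), will retry after %v\n", attempt, err, backoff)
			time.Sleep(backoff)
		}
		fmt.Println("giving up: GUEST_PAUSE")
	}

On a node where /run/runc is missing, each attempt reproduces the same stderr line shown above, which is why the pause command ultimately reports a non-zero exit.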
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-336923 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-336923
helpers_test.go:243: (dbg) docker inspect newest-cni-336923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb",
	        "Created": "2025-11-01T10:42:50.393754457Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385451,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:43:19.804179247Z",
	            "FinishedAt": "2025-11-01T10:43:18.766629758Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb/hosts",
	        "LogPath": "/var/lib/docker/containers/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb-json.log",
	        "Name": "/newest-cni-336923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-336923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-336923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb",
	                "LowerDir": "/var/lib/docker/overlay2/98f14fe0a7fd5569b2d5ff51d7565e3b5a30ff46cfb917c74d0aef27f139bdd2-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98f14fe0a7fd5569b2d5ff51d7565e3b5a30ff46cfb917c74d0aef27f139bdd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98f14fe0a7fd5569b2d5ff51d7565e3b5a30ff46cfb917c74d0aef27f139bdd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98f14fe0a7fd5569b2d5ff51d7565e3b5a30ff46cfb917c74d0aef27f139bdd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-336923",
	                "Source": "/var/lib/docker/volumes/newest-cni-336923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-336923",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-336923",
	                "name.minikube.sigs.k8s.io": "newest-cni-336923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ef677e4f12a591104bc8d57d645ab9fbca4cbb183fe2cecf0362f087c592a7c9",
	            "SandboxKey": "/var/run/docker/netns/ef677e4f12a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-336923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:b6:05:90:16:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e144a937d06e262f0e3ad8a76371e64c3d6dd9439eb433489836f813e4181b37",
	                    "EndpointID": "a27994804f4253fdaa6c065fc28da209f39a1861a0141f05c219908fcbef66ce",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-336923",
	                        "f7f97f7d0c24"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
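The inspect dump confirms the container itself is still "running" and that its SSH endpoint is published on 127.0.0.1:33133 (the 22/tcp entry under NetworkSettings.Ports), the same field the provisioning step later reads with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. Below is a small sketch of reading that binding from the docker inspect JSON; the container name is taken from this report and the snippet is not part of the test suite.

	// Sketch: extract the published 22/tcp binding from "docker inspect",
	// matching the HostIp/HostPort fields shown in the dump above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type portBinding struct {
		HostIp   string
		HostPort string
	}

	type inspected struct {
		Name            string
		NetworkSettings struct {
			Ports map[string][]portBinding
		}
	}

	func main() {
		// Container name taken from this report; adjust for other profiles.
		out, err := exec.Command("docker", "inspect", "newest-cni-336923").Output()
		if err != nil {
			panic(err)
		}
		var cs []inspected
		if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
			panic(fmt.Sprintf("unexpected inspect output: %v", err))
		}
		for _, b := range cs[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh for %s published at %s:%s\n", cs[0].Name, b.HostIp, b.HostPort)
		}
	}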
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-336923 -n newest-cni-336923
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-336923 -n newest-cni-336923: exit status 2 (313.563471ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-336923 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p embed-certs-071527 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ image   │ no-preload-753486 image list --format=json                                                                                                                                                                                                    │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p no-preload-753486 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ old-k8s-version-707467 image list --format=json                                                                                                                                                                                               │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p old-k8s-version-707467 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-433711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-433711 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-336923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-433711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ image   │ embed-certs-071527 image list --format=json                                                                                                                                                                                                   │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ pause   │ -p embed-certs-071527 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ stop    │ -p newest-cni-336923 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p embed-certs-071527                                                                                                                                                                                                                         │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-336923 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p embed-certs-071527                                                                                                                                                                                                                         │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ start   │ -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ image   │ newest-cni-336923 image list --format=json                                                                                                                                                                                                    │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ pause   │ -p newest-cni-336923 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:43:19
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:43:19.562995  385211 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:43:19.563161  385211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:19.563172  385211 out.go:374] Setting ErrFile to fd 2...
	I1101 10:43:19.563179  385211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:19.563441  385211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:43:19.563922  385211 out.go:368] Setting JSON to false
	I1101 10:43:19.565279  385211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8739,"bootTime":1761985060,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:43:19.565370  385211 start.go:143] virtualization: kvm guest
	I1101 10:43:19.567110  385211 out.go:179] * [newest-cni-336923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:43:19.568689  385211 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:43:19.568720  385211 notify.go:221] Checking for updates...
	I1101 10:43:19.570960  385211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:43:19.572305  385211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:19.573417  385211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:43:19.574730  385211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:43:19.576048  385211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:43:19.577590  385211 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:19.578285  385211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:43:19.605771  385211 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:43:19.605883  385211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:19.665006  385211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 10:43:19.654595853 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:19.665205  385211 docker.go:319] overlay module found
	I1101 10:43:19.666673  385211 out.go:179] * Using the docker driver based on existing profile
	I1101 10:43:19.667653  385211 start.go:309] selected driver: docker
	I1101 10:43:19.667667  385211 start.go:930] validating driver "docker" against &{Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:19.667749  385211 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:43:19.668238  385211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:19.729845  385211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 10:43:19.718798371 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:19.730108  385211 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:43:19.730135  385211 cni.go:84] Creating CNI manager for ""
	I1101 10:43:19.730186  385211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:43:19.730221  385211 start.go:353] cluster config:
	{Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:19.731861  385211 out.go:179] * Starting "newest-cni-336923" primary control-plane node in "newest-cni-336923" cluster
	I1101 10:43:19.732887  385211 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:43:19.733870  385211 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:43:19.734910  385211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:43:19.734977  385211 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:43:19.734992  385211 cache.go:59] Caching tarball of preloaded images
	I1101 10:43:19.735037  385211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:43:19.735072  385211 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:43:19.735085  385211 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:43:19.735216  385211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/config.json ...
	I1101 10:43:19.759288  385211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:43:19.759307  385211 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:43:19.759322  385211 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:43:19.759351  385211 start.go:360] acquireMachinesLock for newest-cni-336923: {Name:mk078b1ded54eaee8a26288c21e4405f07864b1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:43:19.759448  385211 start.go:364] duration metric: took 51.416µs to acquireMachinesLock for "newest-cni-336923"
	I1101 10:43:19.759473  385211 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:43:19.759483  385211 fix.go:54] fixHost starting: 
	I1101 10:43:19.759794  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:19.777831  385211 fix.go:112] recreateIfNeeded on newest-cni-336923: state=Stopped err=<nil>
	W1101 10:43:19.777879  385211 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:43:19.580383  380170 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 10:43:19.585617  380170 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 10:43:19.586638  380170 api_server.go:141] control plane version: v1.34.1
	I1101 10:43:19.586665  380170 api_server.go:131] duration metric: took 506.95003ms to wait for apiserver health ...
	I1101 10:43:19.586676  380170 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:43:19.590035  380170 system_pods.go:59] 8 kube-system pods found
	I1101 10:43:19.590088  380170 system_pods.go:61] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:43:19.590104  380170 system_pods.go:61] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:43:19.590111  380170 system_pods.go:61] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:43:19.590119  380170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:43:19.590131  380170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:43:19.590137  380170 system_pods.go:61] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:43:19.590144  380170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:43:19.590149  380170 system_pods.go:61] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Running
	I1101 10:43:19.590164  380170 system_pods.go:74] duration metric: took 3.480437ms to wait for pod list to return data ...
	I1101 10:43:19.590176  380170 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:43:19.592779  380170 default_sa.go:45] found service account: "default"
	I1101 10:43:19.592800  380170 default_sa.go:55] duration metric: took 2.617606ms for default service account to be created ...
	I1101 10:43:19.592810  380170 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:43:19.595723  380170 system_pods.go:86] 8 kube-system pods found
	I1101 10:43:19.595754  380170 system_pods.go:89] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:43:19.595765  380170 system_pods.go:89] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:43:19.595781  380170 system_pods.go:89] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:43:19.595789  380170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:43:19.595799  380170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:43:19.595805  380170 system_pods.go:89] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:43:19.595813  380170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:43:19.595818  380170 system_pods.go:89] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Running
	I1101 10:43:19.595829  380170 system_pods.go:126] duration metric: took 3.011558ms to wait for k8s-apps to be running ...
	I1101 10:43:19.595837  380170 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:43:19.595885  380170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:19.612808  380170 system_svc.go:56] duration metric: took 16.960672ms WaitForService to wait for kubelet
	I1101 10:43:19.612844  380170 kubeadm.go:587] duration metric: took 2.485063342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:43:19.612867  380170 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:43:19.616298  380170 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:43:19.616333  380170 node_conditions.go:123] node cpu capacity is 8
	I1101 10:43:19.616349  380170 node_conditions.go:105] duration metric: took 3.477231ms to run NodePressure ...
	I1101 10:43:19.616364  380170 start.go:242] waiting for startup goroutines ...
	I1101 10:43:19.616379  380170 start.go:247] waiting for cluster config update ...
	I1101 10:43:19.616401  380170 start.go:256] writing updated cluster config ...
	I1101 10:43:19.616752  380170 ssh_runner.go:195] Run: rm -f paused
	I1101 10:43:19.620456  380170 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:43:19.623291  380170 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v7tvt" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:43:21.628140  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:23.630013  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	I1101 10:43:19.779427  385211 out.go:252] * Restarting existing docker container for "newest-cni-336923" ...
	I1101 10:43:19.779489  385211 cli_runner.go:164] Run: docker start newest-cni-336923
	I1101 10:43:20.014386  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:20.033355  385211 kic.go:430] container "newest-cni-336923" state is running.
	I1101 10:43:20.033776  385211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-336923
	I1101 10:43:20.051719  385211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/config.json ...
	I1101 10:43:20.051923  385211 machine.go:94] provisionDockerMachine start ...
	I1101 10:43:20.051985  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:20.069646  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:20.069891  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:20.069906  385211 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:43:20.070476  385211 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58296->127.0.0.1:33133: read: connection reset by peer
	I1101 10:43:23.216448  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-336923
	
	I1101 10:43:23.216483  385211 ubuntu.go:182] provisioning hostname "newest-cni-336923"
	I1101 10:43:23.216574  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:23.239604  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:23.240021  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:23.240050  385211 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-336923 && echo "newest-cni-336923" | sudo tee /etc/hostname
	I1101 10:43:23.406412  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-336923
	
	I1101 10:43:23.406490  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:23.430458  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:23.430817  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:23.430849  385211 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-336923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-336923/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-336923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:43:23.584527  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:43:23.584561  385211 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:43:23.584586  385211 ubuntu.go:190] setting up certificates
	I1101 10:43:23.584599  385211 provision.go:84] configureAuth start
	I1101 10:43:23.584671  385211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-336923
	I1101 10:43:23.606864  385211 provision.go:143] copyHostCerts
	I1101 10:43:23.606939  385211 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:43:23.606959  385211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:43:23.607044  385211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:43:23.607184  385211 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:43:23.607198  385211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:43:23.607244  385211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:43:23.607352  385211 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:43:23.607365  385211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:43:23.607400  385211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:43:23.607554  385211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.newest-cni-336923 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-336923]
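The provision step above signs a per-machine server certificate against the minikube CA, embedding both IP and DNS SANs (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-336923). A minimal Go sketch of that kind of SAN-bearing issuance follows; it is not minikube's own code, and it generates a throwaway CA in memory instead of reading ca.pem/ca-key.pem, purely so it runs standalone.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for .minikube/certs/ca.pem / ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the SANs reported in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-336923"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-336923"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }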
	I1101 10:43:24.105760  385211 provision.go:177] copyRemoteCerts
	I1101 10:43:24.105843  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:43:24.105901  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.123234  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.223265  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:43:24.240358  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:43:24.257152  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:43:24.273645  385211 provision.go:87] duration metric: took 689.027992ms to configureAuth
	I1101 10:43:24.273673  385211 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:43:24.273876  385211 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:24.274012  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.291114  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:24.291345  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:24.291367  385211 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:43:24.560882  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:43:24.560915  385211 machine.go:97] duration metric: took 4.508974654s to provisionDockerMachine
	I1101 10:43:24.560932  385211 start.go:293] postStartSetup for "newest-cni-336923" (driver="docker")
	I1101 10:43:24.560965  385211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:43:24.561042  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:43:24.561104  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.581756  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.682079  385211 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:43:24.685513  385211 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:43:24.685538  385211 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:43:24.685552  385211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 10:43:24.685593  385211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 10:43:24.685674  385211 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem -> 615222.pem in /etc/ssl/certs
	I1101 10:43:24.685761  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:43:24.693293  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:43:24.710868  385211 start.go:296] duration metric: took 149.921905ms for postStartSetup
	I1101 10:43:24.710959  385211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:43:24.711009  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.727702  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.823431  385211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:43:24.828000  385211 fix.go:56] duration metric: took 5.068504403s for fixHost
	I1101 10:43:24.828024  385211 start.go:83] releasing machines lock for "newest-cni-336923", held for 5.068561902s
	I1101 10:43:24.828091  385211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-336923
	I1101 10:43:24.845157  385211 ssh_runner.go:195] Run: cat /version.json
	I1101 10:43:24.845213  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.845273  385211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:43:24.845342  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.863014  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.863284  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:25.013866  385211 ssh_runner.go:195] Run: systemctl --version
	I1101 10:43:25.020582  385211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:43:25.057023  385211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:43:25.062007  385211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:43:25.062060  385211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:43:25.070026  385211 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:43:25.070050  385211 start.go:496] detecting cgroup driver to use...
	I1101 10:43:25.070082  385211 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:43:25.070139  385211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:43:25.085382  385211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:43:25.098030  385211 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:43:25.098075  385211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:43:25.111846  385211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:43:25.123714  385211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:43:25.203249  385211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:43:25.286193  385211 docker.go:234] disabling docker service ...
	I1101 10:43:25.286274  385211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:43:25.300278  385211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:43:25.312521  385211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:43:25.424819  385211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:43:25.535913  385211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:43:25.552035  385211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:43:25.570081  385211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:43:25.570141  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.581526  385211 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:43:25.581590  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.592157  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.602648  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.613285  385211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:43:25.623297  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.634452  385211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.644745  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.654826  385211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:43:25.663288  385211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:43:25.672545  385211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:43:25.774692  385211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:43:26.389931  385211 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:43:26.390002  385211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:43:26.395453  385211 start.go:564] Will wait 60s for crictl version
	I1101 10:43:26.395532  385211 ssh_runner.go:195] Run: which crictl
	I1101 10:43:26.400212  385211 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:43:26.432448  385211 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:43:26.432553  385211 ssh_runner.go:195] Run: crio --version
	I1101 10:43:26.469531  385211 ssh_runner.go:195] Run: crio --version
	I1101 10:43:26.509599  385211 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:43:26.510918  385211 cli_runner.go:164] Run: docker network inspect newest-cni-336923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
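The repeated docker container/network inspect calls in this log pass Go text/template expressions to Docker's --format flag; the SSH port lookup, for example, walks .NetworkSettings.Ports["22/tcp"][0].HostPort. The small standalone sketch below evaluates the same template shape over a hypothetical struct (field names chosen to mirror that lookup, not Docker's full inspect schema).

    package main

    import (
    	"os"
    	"text/template"
    )

    // Minimal stand-in for the slice of the inspect output that the
    // log's "--format" expression walks.
    type inspect struct {
    	NetworkSettings struct {
    		Ports map[string][]struct{ HostPort string }
    	}
    }

    func main() {
    	var c inspect
    	c.NetworkSettings.Ports = map[string][]struct{ HostPort string }{
    		"22/tcp": {{HostPort: "33133"}},
    	}
    	// Same template shape as the cli_runner invocations above.
    	tmpl := template.Must(template.New("port").Parse(
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
    	tmpl.Execute(os.Stdout, c) // prints 33133
    }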
	I1101 10:43:26.532517  385211 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:43:26.537439  385211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:43:26.551090  385211 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:43:26.552134  385211 kubeadm.go:884] updating cluster {Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:43:26.552309  385211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:43:26.552371  385211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:43:26.592302  385211 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:43:26.592326  385211 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:43:26.592385  385211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:43:26.623998  385211 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:43:26.624025  385211 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:43:26.624035  385211 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:43:26.624170  385211 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-336923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:43:26.624265  385211 ssh_runner.go:195] Run: crio config
	I1101 10:43:26.674400  385211 cni.go:84] Creating CNI manager for ""
	I1101 10:43:26.674422  385211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:43:26.674440  385211 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:43:26.674462  385211 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-336923 NodeName:newest-cni-336923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:43:26.674609  385211 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-336923"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:43:26.674672  385211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:43:26.684472  385211 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:43:26.684555  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:43:26.693298  385211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:43:26.708330  385211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:43:26.723417  385211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1101 10:43:26.738609  385211 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:43:26.743102  385211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:43:26.754490  385211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:43:26.860261  385211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:43:26.888382  385211 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923 for IP: 192.168.85.2
	I1101 10:43:26.888407  385211 certs.go:195] generating shared ca certs ...
	I1101 10:43:26.888429  385211 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:26.888637  385211 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 10:43:26.888701  385211 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 10:43:26.888718  385211 certs.go:257] generating profile certs ...
	I1101 10:43:26.888850  385211 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/client.key
	I1101 10:43:26.888933  385211 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/apiserver.key.243c0d0d
	I1101 10:43:26.888995  385211 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/proxy-client.key
	I1101 10:43:26.889152  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem (1338 bytes)
	W1101 10:43:26.889197  385211 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522_empty.pem, impossibly tiny 0 bytes
	I1101 10:43:26.889212  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:43:26.889244  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:43:26.889284  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:43:26.889316  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 10:43:26.889372  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:43:26.890238  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:43:26.915760  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:43:26.940726  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:43:26.964835  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:43:26.990573  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:43:27.013067  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:43:27.036519  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:43:27.059066  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:43:27.081056  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /usr/share/ca-certificates/615222.pem (1708 bytes)
	I1101 10:43:27.103980  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:43:27.126512  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem --> /usr/share/ca-certificates/61522.pem (1338 bytes)
	I1101 10:43:27.149504  385211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:43:27.165265  385211 ssh_runner.go:195] Run: openssl version
	I1101 10:43:27.172420  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/61522.pem && ln -fs /usr/share/ca-certificates/61522.pem /etc/ssl/certs/61522.pem"
	I1101 10:43:27.183610  385211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/61522.pem
	I1101 10:43:27.188666  385211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:01 /usr/share/ca-certificates/61522.pem
	I1101 10:43:27.188723  385211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/61522.pem
	I1101 10:43:27.245581  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/61522.pem /etc/ssl/certs/51391683.0"
	I1101 10:43:27.256943  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/615222.pem && ln -fs /usr/share/ca-certificates/615222.pem /etc/ssl/certs/615222.pem"
	I1101 10:43:27.268346  385211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/615222.pem
	I1101 10:43:27.273383  385211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:01 /usr/share/ca-certificates/615222.pem
	I1101 10:43:27.273441  385211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/615222.pem
	I1101 10:43:27.329279  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/615222.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:43:27.340693  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:43:27.351249  385211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:43:27.355345  385211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:43:27.355403  385211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:43:27.414180  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:43:27.426101  385211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:43:27.430861  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:43:27.486012  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:43:27.563512  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:43:27.622833  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:43:27.682140  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:43:27.737630  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
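Each of the openssl x509 -noout -checkend 86400 runs above asks whether a certificate will still be valid 86400 seconds (24 h) from now; a non-zero exit would force regeneration before the cluster restart continues. A rough Go equivalent of that check, using one of the certificate paths probed above:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// Same file the log checks with `openssl x509 -checkend 86400`.
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	deadline := time.Now().Add(86400 * time.Second)
    	if cert.NotAfter.Before(deadline) {
    		fmt.Println("certificate expires within 24h; regenerate")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is good for at least another 24h")
    }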
	I1101 10:43:27.793353  385211 kubeadm.go:401] StartCluster: {Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:27.793475  385211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:43:27.793563  385211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:43:27.831673  385211 cri.go:89] found id: ""
	I1101 10:43:27.831737  385211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:43:27.840098  385211 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:43:27.840120  385211 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:43:27.840169  385211 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:43:27.847934  385211 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:43:27.848632  385211 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-336923" does not appear in /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:27.848984  385211 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-58021/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-336923" cluster setting kubeconfig missing "newest-cni-336923" context setting]
	I1101 10:43:27.849613  385211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:27.876052  385211 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:43:27.884631  385211 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:43:27.884663  385211 kubeadm.go:602] duration metric: took 44.535917ms to restartPrimaryControlPlane
	I1101 10:43:27.884674  385211 kubeadm.go:403] duration metric: took 91.333695ms to StartCluster
	I1101 10:43:27.884693  385211 settings.go:142] acquiring lock: {Name:mka443f0ac99a59b23190497686b8296dc73358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:27.884762  385211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:27.885777  385211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:27.921113  385211 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:43:27.921231  385211 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:43:27.921378  385211 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-336923"
	I1101 10:43:27.921390  385211 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:27.921404  385211 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-336923"
	W1101 10:43:27.921414  385211 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:43:27.921408  385211 addons.go:70] Setting dashboard=true in profile "newest-cni-336923"
	I1101 10:43:27.921422  385211 addons.go:70] Setting default-storageclass=true in profile "newest-cni-336923"
	I1101 10:43:27.921443  385211 addons.go:239] Setting addon dashboard=true in "newest-cni-336923"
	I1101 10:43:27.921448  385211 host.go:66] Checking if "newest-cni-336923" exists ...
	W1101 10:43:27.921455  385211 addons.go:248] addon dashboard should already be in state true
	I1101 10:43:27.921458  385211 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-336923"
	I1101 10:43:27.921512  385211 host.go:66] Checking if "newest-cni-336923" exists ...
	I1101 10:43:27.921788  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.921874  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.921878  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.922943  385211 out.go:179] * Verifying Kubernetes components...
	I1101 10:43:27.926542  385211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:43:27.948055  385211 addons.go:239] Setting addon default-storageclass=true in "newest-cni-336923"
	W1101 10:43:27.948083  385211 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:43:27.948115  385211 host.go:66] Checking if "newest-cni-336923" exists ...
	I1101 10:43:27.948592  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.949931  385211 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:43:27.951582  385211 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:43:27.951591  385211 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:43:27.951717  385211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:43:27.951785  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:27.952894  385211 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1101 10:43:25.630441  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:28.132297  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	I1101 10:43:27.954213  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:43:27.954233  385211 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:43:27.954294  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:27.975475  385211 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:43:27.975514  385211 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:43:27.975584  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:27.976243  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:27.982565  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:28.003386  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:28.069464  385211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:43:28.097817  385211 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:43:28.097875  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:43:28.097884  385211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:43:28.099383  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:43:28.099403  385211 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:43:28.122313  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:43:28.122561  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:43:28.122586  385211 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:43:28.149303  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:43:28.149330  385211 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:43:28.174173  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:43:28.174199  385211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:43:28.195857  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:43:28.195884  385211 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1101 10:43:28.201388  385211 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:43:28.201435  385211 retry.go:31] will retry after 259.751612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:43:28.218472  385211 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:43:28.218525  385211 retry.go:31] will retry after 370.922823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
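The two "apply failed, will retry" warnings above are expected during a restart: kubelet has only just been started and kube-apiserver is not yet answering on localhost:8443, so the connection is refused and minikube schedules a retry after a short randomized delay (the retry.go lines). A stripped-down sketch of that pattern, assuming a hypothetical helper that simply re-runs kubectl apply with a jittered, growing backoff:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // retryApply re-runs a kubectl apply until it succeeds or attempts run out,
    // backing off with a jittered, growing delay much like the log's retry lines.
    func retryApply(args []string, attempts int) error {
    	delay := 200 * time.Millisecond
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = exec.Command("kubectl", args...).Run(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("apply failed, will retry after %v: %v\n", delay+jitter, err)
    		time.Sleep(delay + jitter)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	if err := retryApply([]string{"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml"}, 5); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }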
	I1101 10:43:28.219522  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:43:28.219550  385211 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:43:28.237475  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:43:28.237512  385211 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:43:28.253898  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:43:28.253926  385211 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:43:28.267526  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:43:28.267552  385211 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:43:28.280403  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:43:28.462006  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:43:28.590605  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:43:28.598216  385211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:43:30.341998  385211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.061546929s)
	I1101 10:43:30.343194  385211 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-336923 addons enable metrics-server
	
	I1101 10:43:30.427213  385211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.96517409s)
	I1101 10:43:30.427275  385211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.836635399s)
	I1101 10:43:30.427306  385211 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.829057914s)
	I1101 10:43:30.427330  385211 api_server.go:72] duration metric: took 2.506172151s to wait for apiserver process to appear ...
	I1101 10:43:30.427336  385211 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:43:30.427357  385211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:43:30.434031  385211 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:43:30.434053  385211 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:43:30.442027  385211 out.go:179] * Enabled addons: dashboard, storage-provisioner, default-storageclass
	I1101 10:43:30.443021  385211 addons.go:515] duration metric: took 2.521798237s for enable addons: enabled=[dashboard storage-provisioner default-storageclass]
	I1101 10:43:30.928254  385211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:43:30.933188  385211 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:43:30.933224  385211 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:43:31.427738  385211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:43:31.432031  385211 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:43:31.433126  385211 api_server.go:141] control plane version: v1.34.1
	I1101 10:43:31.433156  385211 api_server.go:131] duration metric: took 1.005812081s to wait for apiserver health ...
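The 500 responses above are the normal shape of an apiserver that is up but still finishing its post-start hooks (rbac/bootstrap-roles, bootstrap priority classes); minikube simply keeps polling /healthz until it flips to 200, which here takes about a second. A compressed sketch of such a poll loop, with certificate verification skipped purely for brevity (the real check trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Simplification for the sketch only; the real check pins the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }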
	I1101 10:43:31.433168  385211 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:43:31.436835  385211 system_pods.go:59] 8 kube-system pods found
	I1101 10:43:31.436864  385211 system_pods.go:61] "coredns-66bc5c9577-j9pcl" [9244c7b5-e2f4-44ec-a7c9-f337e044f46e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:43:31.436872  385211 system_pods.go:61] "etcd-newest-cni-336923" [e4c9b0a5-3bfb-4e36-bc6e-fcfe9945c1f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:43:31.436882  385211 system_pods.go:61] "kindnet-6lbk4" [e62d231c-e1d5-4e4a-81e1-0be9614e211d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:43:31.436890  385211 system_pods.go:61] "kube-apiserver-newest-cni-336923" [f7c5c26f-4f73-459f-b72a-79f07879ab50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:43:31.436897  385211 system_pods.go:61] "kube-controller-manager-newest-cni-336923" [4d758565-1733-499f-ad35-853e88c03a13] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:43:31.436903  385211 system_pods.go:61] "kube-proxy-z65pd" [5a6496ad-eaf7-4f96-af7e-0dd5f88346c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:43:31.436910  385211 system_pods.go:61] "kube-scheduler-newest-cni-336923" [03d3cde4-6638-4fe6-949a-26f05cd8dfac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:43:31.436915  385211 system_pods.go:61] "storage-provisioner" [7165902e-833a-41e9-84eb-cf31f057f373] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:43:31.436924  385211 system_pods.go:74] duration metric: took 3.751261ms to wait for pod list to return data ...
	I1101 10:43:31.436933  385211 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:43:31.439538  385211 default_sa.go:45] found service account: "default"
	I1101 10:43:31.439560  385211 default_sa.go:55] duration metric: took 2.618436ms for default service account to be created ...
	I1101 10:43:31.439574  385211 kubeadm.go:587] duration metric: took 3.518414216s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:43:31.439596  385211 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:43:31.442059  385211 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:43:31.442085  385211 node_conditions.go:123] node cpu capacity is 8
	I1101 10:43:31.442098  385211 node_conditions.go:105] duration metric: took 2.496441ms to run NodePressure ...
	I1101 10:43:31.442113  385211 start.go:242] waiting for startup goroutines ...
	I1101 10:43:31.442127  385211 start.go:247] waiting for cluster config update ...
	I1101 10:43:31.442144  385211 start.go:256] writing updated cluster config ...
	I1101 10:43:31.442423  385211 ssh_runner.go:195] Run: rm -f paused
	I1101 10:43:31.493548  385211 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:43:31.495480  385211 out.go:179] * Done! kubectl is now configured to use "newest-cni-336923" cluster and "default" namespace by default
	W1101 10:43:30.628520  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:32.629114  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.277727206Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=edc0c2ed-61ed-430f-94a9-ce0c1a77fa9f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.279082812Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.279692304Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.280079338Z" level=info msg="Ran pod sandbox 787e133010dd4de748839ca96737ba9c96585f7c3163eb785045d6004d71ec73 with infra container: kube-system/kube-proxy-z65pd/POD" id=a5b8fe23-3f01-4edb-a5c5-c46c4b23adc5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.28056486Z" level=info msg="Ran pod sandbox 477a36edde5a6a6cffdcac72969168c544491bc93057efd4a91771980d7bcc95 with infra container: kube-system/kindnet-6lbk4/POD" id=edc0c2ed-61ed-430f-94a9-ce0c1a77fa9f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.281330627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=780f8477-c061-42a5-a0e3-12c00bdb8d83 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.281838687Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e2862f67-312f-48e0-b2f2-eeb572bab09f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.025310658Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9b032264-827d-4332-849a-ab46bc413a7c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.026392835Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1bbeceb2-0eed-49f2-86d3-bc2c17ece443 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.027581695Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=076da46c-7f81-4ba1-9f92-81c09c2497ef name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.028669903Z" level=info msg="Creating container: kube-system/kindnet-6lbk4/kindnet-cni" id=f7a7002e-edf7-4c44-a726-c715fab1fe73 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.028713078Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2e39006e-bff5-43f1-b03f-3ee7a469b0da name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.028787957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.03047814Z" level=info msg="Creating container: kube-system/kube-proxy-z65pd/kube-proxy" id=038a7908-63de-41fb-b809-34ac291fd30c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.030626383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.035128828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.035750449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.037732076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.03832609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.066595902Z" level=info msg="Created container a6d88d23ae3a6133074c0336e7b2c0423a83ff6593732aa8f95697cdc8d67901: kube-system/kindnet-6lbk4/kindnet-cni" id=f7a7002e-edf7-4c44-a726-c715fab1fe73 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.067344863Z" level=info msg="Starting container: a6d88d23ae3a6133074c0336e7b2c0423a83ff6593732aa8f95697cdc8d67901" id=b418237a-8677-4646-863f-5ec6aeaddc88 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.069605106Z" level=info msg="Started container" PID=1054 containerID=a6d88d23ae3a6133074c0336e7b2c0423a83ff6593732aa8f95697cdc8d67901 description=kube-system/kindnet-6lbk4/kindnet-cni id=b418237a-8677-4646-863f-5ec6aeaddc88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=477a36edde5a6a6cffdcac72969168c544491bc93057efd4a91771980d7bcc95
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.071636817Z" level=info msg="Created container 8b2f249be57c07c0fa1ccb42aff365050a82c031fceea126b5fcf5b78e77eb6a: kube-system/kube-proxy-z65pd/kube-proxy" id=038a7908-63de-41fb-b809-34ac291fd30c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.072283384Z" level=info msg="Starting container: 8b2f249be57c07c0fa1ccb42aff365050a82c031fceea126b5fcf5b78e77eb6a" id=56648dda-dc53-4bc9-af14-1be87cca2c26 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.075265637Z" level=info msg="Started container" PID=1055 containerID=8b2f249be57c07c0fa1ccb42aff365050a82c031fceea126b5fcf5b78e77eb6a description=kube-system/kube-proxy-z65pd/kube-proxy id=56648dda-dc53-4bc9-af14-1be87cca2c26 name=/runtime.v1.RuntimeService/StartContainer sandboxID=787e133010dd4de748839ca96737ba9c96585f7c3163eb785045d6004d71ec73
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8b2f249be57c0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   787e133010dd4       kube-proxy-z65pd                            kube-system
	a6d88d23ae3a6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   477a36edde5a6       kindnet-6lbk4                               kube-system
	05876bdf52963       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   b3c1d6a861140       kube-apiserver-newest-cni-336923            kube-system
	96ea6466b389e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   d89a0b0caf2be       etcd-newest-cni-336923                      kube-system
	303e5cbe1c98e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   9e3222029cb57       kube-controller-manager-newest-cni-336923   kube-system
	b6e317eecde60       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   f64d5e94e37da       kube-scheduler-newest-cni-336923            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-336923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-336923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=newest-cni-336923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_43_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:43:00 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-336923
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:43:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:43:30 +0000   Sat, 01 Nov 2025 10:42:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:43:30 +0000   Sat, 01 Nov 2025 10:42:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:43:30 +0000   Sat, 01 Nov 2025 10:42:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 10:43:30 +0000   Sat, 01 Nov 2025 10:42:58 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-336923
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                6f793bc2-07ee-4607-b191-dc232242ea47
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-336923                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-6lbk4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-336923             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-336923    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-z65pd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-336923             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node newest-cni-336923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node newest-cni-336923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node newest-cni-336923 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-336923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  33s                kubelet          Node newest-cni-336923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     33s                kubelet          Node newest-cni-336923 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29s                node-controller  Node newest-cni-336923 event: Registered Node newest-cni-336923 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 9s)    kubelet          Node newest-cni-336923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 9s)    kubelet          Node newest-cni-336923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x8 over 9s)    kubelet          Node newest-cni-336923 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-336923 event: Registered Node newest-cni-336923 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	[Nov 1 10:42] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000013] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [96ea6466b389e2e86c6d49b93414ad564bab4d3aff97667a22ec3b36e4aa6693] <==
	{"level":"warn","ts":"2025-11-01T10:43:29.266887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.274124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.281395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.289773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.296558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.303433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.309761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.315563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.321435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.328085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.334367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.340636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.347391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.353438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.360168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.366713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.372612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.378912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.393000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.399054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.406914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.428737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.435340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.442296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.484570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35936","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:43:35 up  2:25,  0 user,  load average: 4.49, 3.99, 2.63
	Linux newest-cni-336923 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a6d88d23ae3a6133074c0336e7b2c0423a83ff6593732aa8f95697cdc8d67901] <==
	I1101 10:43:31.257591       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:43:31.257841       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:43:31.257989       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:43:31.258010       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:43:31.258020       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:43:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:43:31.459382       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:43:31.459410       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:43:31.459422       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:43:31.460327       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [05876bdf52963039d74359b1c6e86efb9d3a0b4785c1d0b737d8e32f606c95b6] <==
	I1101 10:43:29.964978       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:43:29.964762       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:43:29.964747       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:43:29.966688       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:43:29.970646       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:43:29.970944       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:43:29.977941       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:43:29.977970       1 policy_source.go:240] refreshing policies
	I1101 10:43:29.984651       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:43:29.984692       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:43:29.984700       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:43:29.984707       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:43:29.984713       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:43:30.015320       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:43:30.018262       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:43:30.235991       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:43:30.261598       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:43:30.284809       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:43:30.291533       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:43:30.326602       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.99.157"}
	I1101 10:43:30.337155       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.83.130"}
	I1101 10:43:30.866334       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:43:33.369919       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:43:33.712904       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:43:33.762640       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [303e5cbe1c98eba4a68058ea2e32a00cf43f2a9a95f999a8979fc1b9c1e2d5ed] <==
	I1101 10:43:33.335786       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:43:33.338099       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:43:33.344332       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:43:33.345488       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:43:33.345564       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:43:33.345673       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:43:33.348819       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:43:33.359560       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:43:33.360371       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:43:33.360405       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:43:33.360423       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:43:33.360463       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:43:33.360520       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:43:33.360410       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:43:33.360745       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:43:33.363560       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:43:33.364878       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:43:33.364958       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:43:33.367162       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:43:33.368546       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:43:33.370825       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:43:33.371944       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:43:33.374064       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:43:33.376000       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:43:33.382090       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8b2f249be57c07c0fa1ccb42aff365050a82c031fceea126b5fcf5b78e77eb6a] <==
	I1101 10:43:31.108195       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:43:31.185129       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:43:31.285787       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:43:31.285832       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:43:31.285930       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:43:31.303205       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:43:31.303263       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:43:31.308028       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:43:31.308441       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:43:31.308477       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:43:31.310726       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:43:31.310769       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:43:31.310830       1 config.go:200] "Starting service config controller"
	I1101 10:43:31.310858       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:43:31.310830       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:43:31.310895       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:43:31.310947       1 config.go:309] "Starting node config controller"
	I1101 10:43:31.310969       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:43:31.310977       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:43:31.411955       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:43:31.412006       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:43:31.411998       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b6e317eecde60778bc7ea3d748bfc59c8cc1f778c663d1b00a08818d50a539f2] <==
	I1101 10:43:28.883789       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:43:29.906091       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:43:29.906133       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:43:29.906146       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:43:29.906155       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:43:29.932806       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:43:29.932904       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:43:29.935781       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:43:29.935885       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:43:29.936171       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:43:29.936206       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:43:30.037633       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.013101     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a6496ad-eaf7-4f96-af7e-0dd5f88346c3-lib-modules\") pod \"kube-proxy-z65pd\" (UID: \"5a6496ad-eaf7-4f96-af7e-0dd5f88346c3\") " pod="kube-system/kube-proxy-z65pd"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.013159     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e62d231c-e1d5-4e4a-81e1-0be9614e211d-cni-cfg\") pod \"kindnet-6lbk4\" (UID: \"e62d231c-e1d5-4e4a-81e1-0be9614e211d\") " pod="kube-system/kindnet-6lbk4"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.013193     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e62d231c-e1d5-4e4a-81e1-0be9614e211d-lib-modules\") pod \"kindnet-6lbk4\" (UID: \"e62d231c-e1d5-4e4a-81e1-0be9614e211d\") " pod="kube-system/kindnet-6lbk4"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.013325     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a6496ad-eaf7-4f96-af7e-0dd5f88346c3-xtables-lock\") pod \"kube-proxy-z65pd\" (UID: \"5a6496ad-eaf7-4f96-af7e-0dd5f88346c3\") " pod="kube-system/kube-proxy-z65pd"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.013680     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e62d231c-e1d5-4e4a-81e1-0be9614e211d-xtables-lock\") pod \"kindnet-6lbk4\" (UID: \"e62d231c-e1d5-4e4a-81e1-0be9614e211d\") " pod="kube-system/kindnet-6lbk4"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.015109     662 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.015212     662 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.015243     662 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.016752     662 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.021751     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.021876     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.022130     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.022204     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.034779     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-336923\" already exists" pod="kube-system/kube-controller-manager-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.034808     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-336923\" already exists" pod="kube-system/kube-scheduler-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.034884     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-336923\" already exists" pod="kube-system/kube-apiserver-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.034808     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-336923\" already exists" pod="kube-system/etcd-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.281937     662 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-z65pd_kube-system(5a6496ad-eaf7-4f96-af7e-0dd5f88346c3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.281998     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-z65pd" podUID="5a6496ad-eaf7-4f96-af7e-0dd5f88346c3"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.282344     662 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-6lbk4_kube-system(e62d231c-e1d5-4e4a-81e1-0be9614e211d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.283519     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-6lbk4" podUID="e62d231c-e1d5-4e4a-81e1-0be9614e211d"
	Nov 01 10:43:32 newest-cni-336923 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:43:32 newest-cni-336923 kubelet[662]: I1101 10:43:32.452312     662 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 10:43:32 newest-cni-336923 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:43:32 newest-cni-336923 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-336923 -n newest-cni-336923
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-336923 -n newest-cni-336923: exit status 2 (325.867417ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-336923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-j9pcl storage-provisioner dashboard-metrics-scraper-6ffb444bf9-f454l kubernetes-dashboard-855c9754f9-qdhnj
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-336923 describe pod coredns-66bc5c9577-j9pcl storage-provisioner dashboard-metrics-scraper-6ffb444bf9-f454l kubernetes-dashboard-855c9754f9-qdhnj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-336923 describe pod coredns-66bc5c9577-j9pcl storage-provisioner dashboard-metrics-scraper-6ffb444bf9-f454l kubernetes-dashboard-855c9754f9-qdhnj: exit status 1 (62.004062ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-j9pcl" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-f454l" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-qdhnj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-336923 describe pod coredns-66bc5c9577-j9pcl storage-provisioner dashboard-metrics-scraper-6ffb444bf9-f454l kubernetes-dashboard-855c9754f9-qdhnj: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-336923
helpers_test.go:243: (dbg) docker inspect newest-cni-336923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb",
	        "Created": "2025-11-01T10:42:50.393754457Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385451,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:43:19.804179247Z",
	            "FinishedAt": "2025-11-01T10:43:18.766629758Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb/hostname",
	        "HostsPath": "/var/lib/docker/containers/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb/hosts",
	        "LogPath": "/var/lib/docker/containers/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb/f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb-json.log",
	        "Name": "/newest-cni-336923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-336923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-336923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f7f97f7d0c24fa90e9c11c08977a5c8c5d262a093fe795e905379aa0fca6c3eb",
	                "LowerDir": "/var/lib/docker/overlay2/98f14fe0a7fd5569b2d5ff51d7565e3b5a30ff46cfb917c74d0aef27f139bdd2-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98f14fe0a7fd5569b2d5ff51d7565e3b5a30ff46cfb917c74d0aef27f139bdd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98f14fe0a7fd5569b2d5ff51d7565e3b5a30ff46cfb917c74d0aef27f139bdd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98f14fe0a7fd5569b2d5ff51d7565e3b5a30ff46cfb917c74d0aef27f139bdd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-336923",
	                "Source": "/var/lib/docker/volumes/newest-cni-336923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-336923",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-336923",
	                "name.minikube.sigs.k8s.io": "newest-cni-336923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ef677e4f12a591104bc8d57d645ab9fbca4cbb183fe2cecf0362f087c592a7c9",
	            "SandboxKey": "/var/run/docker/netns/ef677e4f12a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-336923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:b6:05:90:16:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e144a937d06e262f0e3ad8a76371e64c3d6dd9439eb433489836f813e4181b37",
	                    "EndpointID": "a27994804f4253fdaa6c065fc28da209f39a1861a0141f05c219908fcbef66ce",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-336923",
	                        "f7f97f7d0c24"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-336923 -n newest-cni-336923
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-336923 -n newest-cni-336923: exit status 2 (316.087797ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-336923 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p embed-certs-071527 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ image   │ no-preload-753486 image list --format=json                                                                                                                                                                                                    │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p no-preload-753486 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ image   │ old-k8s-version-707467 image list --format=json                                                                                                                                                                                               │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p old-k8s-version-707467 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-433711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-433711 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-336923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-433711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ image   │ embed-certs-071527 image list --format=json                                                                                                                                                                                                   │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ pause   │ -p embed-certs-071527 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ stop    │ -p newest-cni-336923 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p embed-certs-071527                                                                                                                                                                                                                         │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-336923 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p embed-certs-071527                                                                                                                                                                                                                         │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ start   │ -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ image   │ newest-cni-336923 image list --format=json                                                                                                                                                                                                    │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ pause   │ -p newest-cni-336923 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:43:19
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:43:19.562995  385211 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:43:19.563161  385211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:19.563172  385211 out.go:374] Setting ErrFile to fd 2...
	I1101 10:43:19.563179  385211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:19.563441  385211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:43:19.563922  385211 out.go:368] Setting JSON to false
	I1101 10:43:19.565279  385211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8739,"bootTime":1761985060,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:43:19.565370  385211 start.go:143] virtualization: kvm guest
	I1101 10:43:19.567110  385211 out.go:179] * [newest-cni-336923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:43:19.568689  385211 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:43:19.568720  385211 notify.go:221] Checking for updates...
	I1101 10:43:19.570960  385211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:43:19.572305  385211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:19.573417  385211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:43:19.574730  385211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:43:19.576048  385211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:43:19.577590  385211 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:19.578285  385211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:43:19.605771  385211 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:43:19.605883  385211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:19.665006  385211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 10:43:19.654595853 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:19.665205  385211 docker.go:319] overlay module found
	I1101 10:43:19.666673  385211 out.go:179] * Using the docker driver based on existing profile
	I1101 10:43:19.667653  385211 start.go:309] selected driver: docker
	I1101 10:43:19.667667  385211 start.go:930] validating driver "docker" against &{Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:19.667749  385211 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:43:19.668238  385211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:19.729845  385211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 10:43:19.718798371 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:19.730108  385211 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:43:19.730135  385211 cni.go:84] Creating CNI manager for ""
	I1101 10:43:19.730186  385211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:43:19.730221  385211 start.go:353] cluster config:
	{Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:19.731861  385211 out.go:179] * Starting "newest-cni-336923" primary control-plane node in "newest-cni-336923" cluster
	I1101 10:43:19.732887  385211 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:43:19.733870  385211 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:43:19.734910  385211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:43:19.734977  385211 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:43:19.734992  385211 cache.go:59] Caching tarball of preloaded images
	I1101 10:43:19.735037  385211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:43:19.735072  385211 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:43:19.735085  385211 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:43:19.735216  385211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/config.json ...
	I1101 10:43:19.759288  385211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:43:19.759307  385211 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:43:19.759322  385211 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:43:19.759351  385211 start.go:360] acquireMachinesLock for newest-cni-336923: {Name:mk078b1ded54eaee8a26288c21e4405f07864b1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:43:19.759448  385211 start.go:364] duration metric: took 51.416µs to acquireMachinesLock for "newest-cni-336923"
	I1101 10:43:19.759473  385211 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:43:19.759483  385211 fix.go:54] fixHost starting: 
	I1101 10:43:19.759794  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:19.777831  385211 fix.go:112] recreateIfNeeded on newest-cni-336923: state=Stopped err=<nil>
	W1101 10:43:19.777879  385211 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:43:19.580383  380170 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 10:43:19.585617  380170 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 10:43:19.586638  380170 api_server.go:141] control plane version: v1.34.1
	I1101 10:43:19.586665  380170 api_server.go:131] duration metric: took 506.95003ms to wait for apiserver health ...
	I1101 10:43:19.586676  380170 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:43:19.590035  380170 system_pods.go:59] 8 kube-system pods found
	I1101 10:43:19.590088  380170 system_pods.go:61] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:43:19.590104  380170 system_pods.go:61] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:43:19.590111  380170 system_pods.go:61] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:43:19.590119  380170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:43:19.590131  380170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:43:19.590137  380170 system_pods.go:61] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:43:19.590144  380170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:43:19.590149  380170 system_pods.go:61] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Running
	I1101 10:43:19.590164  380170 system_pods.go:74] duration metric: took 3.480437ms to wait for pod list to return data ...
	I1101 10:43:19.590176  380170 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:43:19.592779  380170 default_sa.go:45] found service account: "default"
	I1101 10:43:19.592800  380170 default_sa.go:55] duration metric: took 2.617606ms for default service account to be created ...
	I1101 10:43:19.592810  380170 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:43:19.595723  380170 system_pods.go:86] 8 kube-system pods found
	I1101 10:43:19.595754  380170 system_pods.go:89] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:43:19.595765  380170 system_pods.go:89] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:43:19.595781  380170 system_pods.go:89] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:43:19.595789  380170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:43:19.595799  380170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:43:19.595805  380170 system_pods.go:89] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:43:19.595813  380170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:43:19.595818  380170 system_pods.go:89] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Running
	I1101 10:43:19.595829  380170 system_pods.go:126] duration metric: took 3.011558ms to wait for k8s-apps to be running ...
	I1101 10:43:19.595837  380170 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:43:19.595885  380170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:19.612808  380170 system_svc.go:56] duration metric: took 16.960672ms WaitForService to wait for kubelet
	I1101 10:43:19.612844  380170 kubeadm.go:587] duration metric: took 2.485063342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:43:19.612867  380170 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:43:19.616298  380170 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:43:19.616333  380170 node_conditions.go:123] node cpu capacity is 8
	I1101 10:43:19.616349  380170 node_conditions.go:105] duration metric: took 3.477231ms to run NodePressure ...
	I1101 10:43:19.616364  380170 start.go:242] waiting for startup goroutines ...
	I1101 10:43:19.616379  380170 start.go:247] waiting for cluster config update ...
	I1101 10:43:19.616401  380170 start.go:256] writing updated cluster config ...
	I1101 10:43:19.616752  380170 ssh_runner.go:195] Run: rm -f paused
	I1101 10:43:19.620456  380170 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:43:19.623291  380170 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v7tvt" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:43:21.628140  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:23.630013  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	I1101 10:43:19.779427  385211 out.go:252] * Restarting existing docker container for "newest-cni-336923" ...
	I1101 10:43:19.779489  385211 cli_runner.go:164] Run: docker start newest-cni-336923
	I1101 10:43:20.014386  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:20.033355  385211 kic.go:430] container "newest-cni-336923" state is running.
	I1101 10:43:20.033776  385211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-336923
	I1101 10:43:20.051719  385211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/config.json ...
	I1101 10:43:20.051923  385211 machine.go:94] provisionDockerMachine start ...
	I1101 10:43:20.051985  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:20.069646  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:20.069891  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:20.069906  385211 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:43:20.070476  385211 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58296->127.0.0.1:33133: read: connection reset by peer
	I1101 10:43:23.216448  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-336923
	
	I1101 10:43:23.216483  385211 ubuntu.go:182] provisioning hostname "newest-cni-336923"
	I1101 10:43:23.216574  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:23.239604  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:23.240021  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:23.240050  385211 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-336923 && echo "newest-cni-336923" | sudo tee /etc/hostname
	I1101 10:43:23.406412  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-336923
	
	I1101 10:43:23.406490  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:23.430458  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:23.430817  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:23.430849  385211 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-336923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-336923/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-336923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:43:23.584527  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:43:23.584561  385211 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:43:23.584586  385211 ubuntu.go:190] setting up certificates
	I1101 10:43:23.584599  385211 provision.go:84] configureAuth start
	I1101 10:43:23.584671  385211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-336923
	I1101 10:43:23.606864  385211 provision.go:143] copyHostCerts
	I1101 10:43:23.606939  385211 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:43:23.606959  385211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:43:23.607044  385211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:43:23.607184  385211 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:43:23.607198  385211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:43:23.607244  385211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:43:23.607352  385211 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:43:23.607365  385211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:43:23.607400  385211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:43:23.607554  385211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.newest-cni-336923 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-336923]
	I1101 10:43:24.105760  385211 provision.go:177] copyRemoteCerts
	I1101 10:43:24.105843  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:43:24.105901  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.123234  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.223265  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:43:24.240358  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:43:24.257152  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:43:24.273645  385211 provision.go:87] duration metric: took 689.027992ms to configureAuth
	I1101 10:43:24.273673  385211 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:43:24.273876  385211 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:24.274012  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.291114  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:24.291345  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:24.291367  385211 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:43:24.560882  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:43:24.560915  385211 machine.go:97] duration metric: took 4.508974654s to provisionDockerMachine
	I1101 10:43:24.560932  385211 start.go:293] postStartSetup for "newest-cni-336923" (driver="docker")
	I1101 10:43:24.560965  385211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:43:24.561042  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:43:24.561104  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.581756  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.682079  385211 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:43:24.685513  385211 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:43:24.685538  385211 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:43:24.685552  385211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 10:43:24.685593  385211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 10:43:24.685674  385211 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem -> 615222.pem in /etc/ssl/certs
	I1101 10:43:24.685761  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:43:24.693293  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:43:24.710868  385211 start.go:296] duration metric: took 149.921905ms for postStartSetup
	I1101 10:43:24.710959  385211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:43:24.711009  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.727702  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.823431  385211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:43:24.828000  385211 fix.go:56] duration metric: took 5.068504403s for fixHost
	I1101 10:43:24.828024  385211 start.go:83] releasing machines lock for "newest-cni-336923", held for 5.068561902s
	I1101 10:43:24.828091  385211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-336923
	I1101 10:43:24.845157  385211 ssh_runner.go:195] Run: cat /version.json
	I1101 10:43:24.845213  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.845273  385211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:43:24.845342  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.863014  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.863284  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:25.013866  385211 ssh_runner.go:195] Run: systemctl --version
	I1101 10:43:25.020582  385211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:43:25.057023  385211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:43:25.062007  385211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:43:25.062060  385211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:43:25.070026  385211 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:43:25.070050  385211 start.go:496] detecting cgroup driver to use...
	I1101 10:43:25.070082  385211 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:43:25.070139  385211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:43:25.085382  385211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:43:25.098030  385211 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:43:25.098075  385211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:43:25.111846  385211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:43:25.123714  385211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:43:25.203249  385211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:43:25.286193  385211 docker.go:234] disabling docker service ...
	I1101 10:43:25.286274  385211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:43:25.300278  385211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:43:25.312521  385211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:43:25.424819  385211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:43:25.535913  385211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:43:25.552035  385211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:43:25.570081  385211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:43:25.570141  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.581526  385211 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:43:25.581590  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.592157  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.602648  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.613285  385211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:43:25.623297  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.634452  385211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.644745  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.654826  385211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:43:25.663288  385211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:43:25.672545  385211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:43:25.774692  385211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:43:26.389931  385211 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:43:26.390002  385211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:43:26.395453  385211 start.go:564] Will wait 60s for crictl version
	I1101 10:43:26.395532  385211 ssh_runner.go:195] Run: which crictl
	I1101 10:43:26.400212  385211 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:43:26.432448  385211 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:43:26.432553  385211 ssh_runner.go:195] Run: crio --version
	I1101 10:43:26.469531  385211 ssh_runner.go:195] Run: crio --version
	I1101 10:43:26.509599  385211 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:43:26.510918  385211 cli_runner.go:164] Run: docker network inspect newest-cni-336923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:43:26.532517  385211 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:43:26.537439  385211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:43:26.551090  385211 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:43:26.552134  385211 kubeadm.go:884] updating cluster {Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:43:26.552309  385211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:43:26.552371  385211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:43:26.592302  385211 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:43:26.592326  385211 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:43:26.592385  385211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:43:26.623998  385211 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:43:26.624025  385211 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:43:26.624035  385211 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:43:26.624170  385211 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-336923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:43:26.624265  385211 ssh_runner.go:195] Run: crio config
	I1101 10:43:26.674400  385211 cni.go:84] Creating CNI manager for ""
	I1101 10:43:26.674422  385211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:43:26.674440  385211 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:43:26.674462  385211 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-336923 NodeName:newest-cni-336923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:43:26.674609  385211 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-336923"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:43:26.674672  385211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:43:26.684472  385211 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:43:26.684555  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:43:26.693298  385211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:43:26.708330  385211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:43:26.723417  385211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1101 10:43:26.738609  385211 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:43:26.743102  385211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:43:26.754490  385211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:43:26.860261  385211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:43:26.888382  385211 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923 for IP: 192.168.85.2
	I1101 10:43:26.888407  385211 certs.go:195] generating shared ca certs ...
	I1101 10:43:26.888429  385211 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:26.888637  385211 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 10:43:26.888701  385211 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 10:43:26.888718  385211 certs.go:257] generating profile certs ...
	I1101 10:43:26.888850  385211 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/client.key
	I1101 10:43:26.888933  385211 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/apiserver.key.243c0d0d
	I1101 10:43:26.888995  385211 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/proxy-client.key
	I1101 10:43:26.889152  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem (1338 bytes)
	W1101 10:43:26.889197  385211 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522_empty.pem, impossibly tiny 0 bytes
	I1101 10:43:26.889212  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:43:26.889244  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:43:26.889284  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:43:26.889316  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 10:43:26.889372  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:43:26.890238  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:43:26.915760  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:43:26.940726  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:43:26.964835  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:43:26.990573  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:43:27.013067  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:43:27.036519  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:43:27.059066  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:43:27.081056  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /usr/share/ca-certificates/615222.pem (1708 bytes)
	I1101 10:43:27.103980  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:43:27.126512  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem --> /usr/share/ca-certificates/61522.pem (1338 bytes)
	I1101 10:43:27.149504  385211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:43:27.165265  385211 ssh_runner.go:195] Run: openssl version
	I1101 10:43:27.172420  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/61522.pem && ln -fs /usr/share/ca-certificates/61522.pem /etc/ssl/certs/61522.pem"
	I1101 10:43:27.183610  385211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/61522.pem
	I1101 10:43:27.188666  385211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:01 /usr/share/ca-certificates/61522.pem
	I1101 10:43:27.188723  385211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/61522.pem
	I1101 10:43:27.245581  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/61522.pem /etc/ssl/certs/51391683.0"
	I1101 10:43:27.256943  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/615222.pem && ln -fs /usr/share/ca-certificates/615222.pem /etc/ssl/certs/615222.pem"
	I1101 10:43:27.268346  385211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/615222.pem
	I1101 10:43:27.273383  385211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:01 /usr/share/ca-certificates/615222.pem
	I1101 10:43:27.273441  385211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/615222.pem
	I1101 10:43:27.329279  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/615222.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:43:27.340693  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:43:27.351249  385211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:43:27.355345  385211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:43:27.355403  385211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:43:27.414180  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:43:27.426101  385211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:43:27.430861  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:43:27.486012  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:43:27.563512  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:43:27.622833  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:43:27.682140  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:43:27.737630  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:43:27.793353  385211 kubeadm.go:401] StartCluster: {Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:27.793475  385211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:43:27.793563  385211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:43:27.831673  385211 cri.go:89] found id: ""
	I1101 10:43:27.831737  385211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:43:27.840098  385211 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:43:27.840120  385211 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:43:27.840169  385211 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:43:27.847934  385211 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:43:27.848632  385211 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-336923" does not appear in /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:27.848984  385211 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-58021/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-336923" cluster setting kubeconfig missing "newest-cni-336923" context setting]
	I1101 10:43:27.849613  385211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:27.876052  385211 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:43:27.884631  385211 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:43:27.884663  385211 kubeadm.go:602] duration metric: took 44.535917ms to restartPrimaryControlPlane
	I1101 10:43:27.884674  385211 kubeadm.go:403] duration metric: took 91.333695ms to StartCluster
	I1101 10:43:27.884693  385211 settings.go:142] acquiring lock: {Name:mka443f0ac99a59b23190497686b8296dc73358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:27.884762  385211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:27.885777  385211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:27.921113  385211 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:43:27.921231  385211 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:43:27.921378  385211 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-336923"
	I1101 10:43:27.921390  385211 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:27.921404  385211 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-336923"
	W1101 10:43:27.921414  385211 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:43:27.921408  385211 addons.go:70] Setting dashboard=true in profile "newest-cni-336923"
	I1101 10:43:27.921422  385211 addons.go:70] Setting default-storageclass=true in profile "newest-cni-336923"
	I1101 10:43:27.921443  385211 addons.go:239] Setting addon dashboard=true in "newest-cni-336923"
	I1101 10:43:27.921448  385211 host.go:66] Checking if "newest-cni-336923" exists ...
	W1101 10:43:27.921455  385211 addons.go:248] addon dashboard should already be in state true
	I1101 10:43:27.921458  385211 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-336923"
	I1101 10:43:27.921512  385211 host.go:66] Checking if "newest-cni-336923" exists ...
	I1101 10:43:27.921788  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.921874  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.921878  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.922943  385211 out.go:179] * Verifying Kubernetes components...
	I1101 10:43:27.926542  385211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:43:27.948055  385211 addons.go:239] Setting addon default-storageclass=true in "newest-cni-336923"
	W1101 10:43:27.948083  385211 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:43:27.948115  385211 host.go:66] Checking if "newest-cni-336923" exists ...
	I1101 10:43:27.948592  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.949931  385211 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:43:27.951582  385211 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:43:27.951591  385211 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:43:27.951717  385211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:43:27.951785  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:27.952894  385211 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1101 10:43:25.630441  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:28.132297  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	I1101 10:43:27.954213  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:43:27.954233  385211 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:43:27.954294  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:27.975475  385211 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:43:27.975514  385211 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:43:27.975584  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:27.976243  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:27.982565  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:28.003386  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:28.069464  385211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:43:28.097817  385211 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:43:28.097875  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:43:28.097884  385211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:43:28.099383  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:43:28.099403  385211 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:43:28.122313  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:43:28.122561  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:43:28.122586  385211 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:43:28.149303  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:43:28.149330  385211 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:43:28.174173  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:43:28.174199  385211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:43:28.195857  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:43:28.195884  385211 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1101 10:43:28.201388  385211 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:43:28.201435  385211 retry.go:31] will retry after 259.751612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:43:28.218472  385211 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:43:28.218525  385211 retry.go:31] will retry after 370.922823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:43:28.219522  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:43:28.219550  385211 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:43:28.237475  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:43:28.237512  385211 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:43:28.253898  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:43:28.253926  385211 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:43:28.267526  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:43:28.267552  385211 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:43:28.280403  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:43:28.462006  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:43:28.590605  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:43:28.598216  385211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:43:30.341998  385211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.061546929s)
	I1101 10:43:30.343194  385211 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-336923 addons enable metrics-server
	
	I1101 10:43:30.427213  385211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.96517409s)
	I1101 10:43:30.427275  385211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.836635399s)
	I1101 10:43:30.427306  385211 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.829057914s)
	I1101 10:43:30.427330  385211 api_server.go:72] duration metric: took 2.506172151s to wait for apiserver process to appear ...
	I1101 10:43:30.427336  385211 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:43:30.427357  385211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:43:30.434031  385211 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:43:30.434053  385211 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:43:30.442027  385211 out.go:179] * Enabled addons: dashboard, storage-provisioner, default-storageclass
	I1101 10:43:30.443021  385211 addons.go:515] duration metric: took 2.521798237s for enable addons: enabled=[dashboard storage-provisioner default-storageclass]
	I1101 10:43:30.928254  385211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:43:30.933188  385211 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:43:30.933224  385211 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:43:31.427738  385211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:43:31.432031  385211 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:43:31.433126  385211 api_server.go:141] control plane version: v1.34.1
	I1101 10:43:31.433156  385211 api_server.go:131] duration metric: took 1.005812081s to wait for apiserver health ...
	I1101 10:43:31.433168  385211 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:43:31.436835  385211 system_pods.go:59] 8 kube-system pods found
	I1101 10:43:31.436864  385211 system_pods.go:61] "coredns-66bc5c9577-j9pcl" [9244c7b5-e2f4-44ec-a7c9-f337e044f46e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:43:31.436872  385211 system_pods.go:61] "etcd-newest-cni-336923" [e4c9b0a5-3bfb-4e36-bc6e-fcfe9945c1f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:43:31.436882  385211 system_pods.go:61] "kindnet-6lbk4" [e62d231c-e1d5-4e4a-81e1-0be9614e211d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:43:31.436890  385211 system_pods.go:61] "kube-apiserver-newest-cni-336923" [f7c5c26f-4f73-459f-b72a-79f07879ab50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:43:31.436897  385211 system_pods.go:61] "kube-controller-manager-newest-cni-336923" [4d758565-1733-499f-ad35-853e88c03a13] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:43:31.436903  385211 system_pods.go:61] "kube-proxy-z65pd" [5a6496ad-eaf7-4f96-af7e-0dd5f88346c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:43:31.436910  385211 system_pods.go:61] "kube-scheduler-newest-cni-336923" [03d3cde4-6638-4fe6-949a-26f05cd8dfac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:43:31.436915  385211 system_pods.go:61] "storage-provisioner" [7165902e-833a-41e9-84eb-cf31f057f373] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:43:31.436924  385211 system_pods.go:74] duration metric: took 3.751261ms to wait for pod list to return data ...
	I1101 10:43:31.436933  385211 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:43:31.439538  385211 default_sa.go:45] found service account: "default"
	I1101 10:43:31.439560  385211 default_sa.go:55] duration metric: took 2.618436ms for default service account to be created ...
	I1101 10:43:31.439574  385211 kubeadm.go:587] duration metric: took 3.518414216s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:43:31.439596  385211 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:43:31.442059  385211 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:43:31.442085  385211 node_conditions.go:123] node cpu capacity is 8
	I1101 10:43:31.442098  385211 node_conditions.go:105] duration metric: took 2.496441ms to run NodePressure ...
	I1101 10:43:31.442113  385211 start.go:242] waiting for startup goroutines ...
	I1101 10:43:31.442127  385211 start.go:247] waiting for cluster config update ...
	I1101 10:43:31.442144  385211 start.go:256] writing updated cluster config ...
	I1101 10:43:31.442423  385211 ssh_runner.go:195] Run: rm -f paused
	I1101 10:43:31.493548  385211 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:43:31.495480  385211 out.go:179] * Done! kubectl is now configured to use "newest-cni-336923" cluster and "default" namespace by default
	W1101 10:43:30.628520  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:32.629114  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.277727206Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=edc0c2ed-61ed-430f-94a9-ce0c1a77fa9f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.279082812Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.279692304Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.280079338Z" level=info msg="Ran pod sandbox 787e133010dd4de748839ca96737ba9c96585f7c3163eb785045d6004d71ec73 with infra container: kube-system/kube-proxy-z65pd/POD" id=a5b8fe23-3f01-4edb-a5c5-c46c4b23adc5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.28056486Z" level=info msg="Ran pod sandbox 477a36edde5a6a6cffdcac72969168c544491bc93057efd4a91771980d7bcc95 with infra container: kube-system/kindnet-6lbk4/POD" id=edc0c2ed-61ed-430f-94a9-ce0c1a77fa9f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.281330627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=780f8477-c061-42a5-a0e3-12c00bdb8d83 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:30 newest-cni-336923 crio[517]: time="2025-11-01T10:43:30.281838687Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e2862f67-312f-48e0-b2f2-eeb572bab09f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.025310658Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9b032264-827d-4332-849a-ab46bc413a7c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.026392835Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=1bbeceb2-0eed-49f2-86d3-bc2c17ece443 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.027581695Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=076da46c-7f81-4ba1-9f92-81c09c2497ef name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.028669903Z" level=info msg="Creating container: kube-system/kindnet-6lbk4/kindnet-cni" id=f7a7002e-edf7-4c44-a726-c715fab1fe73 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.028713078Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2e39006e-bff5-43f1-b03f-3ee7a469b0da name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.028787957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.03047814Z" level=info msg="Creating container: kube-system/kube-proxy-z65pd/kube-proxy" id=038a7908-63de-41fb-b809-34ac291fd30c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.030626383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.035128828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.035750449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.037732076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.03832609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.066595902Z" level=info msg="Created container a6d88d23ae3a6133074c0336e7b2c0423a83ff6593732aa8f95697cdc8d67901: kube-system/kindnet-6lbk4/kindnet-cni" id=f7a7002e-edf7-4c44-a726-c715fab1fe73 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.067344863Z" level=info msg="Starting container: a6d88d23ae3a6133074c0336e7b2c0423a83ff6593732aa8f95697cdc8d67901" id=b418237a-8677-4646-863f-5ec6aeaddc88 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.069605106Z" level=info msg="Started container" PID=1054 containerID=a6d88d23ae3a6133074c0336e7b2c0423a83ff6593732aa8f95697cdc8d67901 description=kube-system/kindnet-6lbk4/kindnet-cni id=b418237a-8677-4646-863f-5ec6aeaddc88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=477a36edde5a6a6cffdcac72969168c544491bc93057efd4a91771980d7bcc95
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.071636817Z" level=info msg="Created container 8b2f249be57c07c0fa1ccb42aff365050a82c031fceea126b5fcf5b78e77eb6a: kube-system/kube-proxy-z65pd/kube-proxy" id=038a7908-63de-41fb-b809-34ac291fd30c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.072283384Z" level=info msg="Starting container: 8b2f249be57c07c0fa1ccb42aff365050a82c031fceea126b5fcf5b78e77eb6a" id=56648dda-dc53-4bc9-af14-1be87cca2c26 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:43:31 newest-cni-336923 crio[517]: time="2025-11-01T10:43:31.075265637Z" level=info msg="Started container" PID=1055 containerID=8b2f249be57c07c0fa1ccb42aff365050a82c031fceea126b5fcf5b78e77eb6a description=kube-system/kube-proxy-z65pd/kube-proxy id=56648dda-dc53-4bc9-af14-1be87cca2c26 name=/runtime.v1.RuntimeService/StartContainer sandboxID=787e133010dd4de748839ca96737ba9c96585f7c3163eb785045d6004d71ec73
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8b2f249be57c0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   787e133010dd4       kube-proxy-z65pd                            kube-system
	a6d88d23ae3a6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   477a36edde5a6       kindnet-6lbk4                               kube-system
	05876bdf52963       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   b3c1d6a861140       kube-apiserver-newest-cni-336923            kube-system
	96ea6466b389e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   d89a0b0caf2be       etcd-newest-cni-336923                      kube-system
	303e5cbe1c98e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   9e3222029cb57       kube-controller-manager-newest-cni-336923   kube-system
	b6e317eecde60       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   f64d5e94e37da       kube-scheduler-newest-cni-336923            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-336923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-336923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=newest-cni-336923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_43_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:43:00 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-336923
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:43:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:43:30 +0000   Sat, 01 Nov 2025 10:42:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:43:30 +0000   Sat, 01 Nov 2025 10:42:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:43:30 +0000   Sat, 01 Nov 2025 10:42:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 10:43:30 +0000   Sat, 01 Nov 2025 10:42:58 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-336923
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                6f793bc2-07ee-4607-b191-dc232242ea47
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-336923                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-6lbk4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-336923             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-336923    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-z65pd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-336923             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node newest-cni-336923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node newest-cni-336923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node newest-cni-336923 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-336923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node newest-cni-336923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     34s                kubelet          Node newest-cni-336923 status is now: NodeHasSufficientPID
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           30s                node-controller  Node newest-cni-336923 event: Registered Node newest-cni-336923 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 10s)   kubelet          Node newest-cni-336923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 10s)   kubelet          Node newest-cni-336923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 10s)   kubelet          Node newest-cni-336923 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-336923 event: Registered Node newest-cni-336923 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	[Nov 1 10:42] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000013] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [96ea6466b389e2e86c6d49b93414ad564bab4d3aff97667a22ec3b36e4aa6693] <==
	{"level":"warn","ts":"2025-11-01T10:43:29.266887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.274124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.281395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.289773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.296558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.303433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.309761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.315563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.321435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.328085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.334367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.340636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.347391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.353438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.360168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.366713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.372612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.378912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.393000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.399054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.406914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.428737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.435340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.442296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:29.484570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35936","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:43:36 up  2:25,  0 user,  load average: 4.49, 3.99, 2.63
	Linux newest-cni-336923 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a6d88d23ae3a6133074c0336e7b2c0423a83ff6593732aa8f95697cdc8d67901] <==
	I1101 10:43:31.257591       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:43:31.257841       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 10:43:31.257989       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:43:31.258010       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:43:31.258020       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:43:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:43:31.459382       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:43:31.459410       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:43:31.459422       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:43:31.460327       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [05876bdf52963039d74359b1c6e86efb9d3a0b4785c1d0b737d8e32f606c95b6] <==
	I1101 10:43:29.964978       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:43:29.964762       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:43:29.964747       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 10:43:29.966688       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:43:29.970646       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 10:43:29.970944       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:43:29.977941       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:43:29.977970       1 policy_source.go:240] refreshing policies
	I1101 10:43:29.984651       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:43:29.984692       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:43:29.984700       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:43:29.984707       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:43:29.984713       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:43:30.015320       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:43:30.018262       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:43:30.235991       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:43:30.261598       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:43:30.284809       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:43:30.291533       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:43:30.326602       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.99.157"}
	I1101 10:43:30.337155       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.83.130"}
	I1101 10:43:30.866334       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:43:33.369919       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:43:33.712904       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:43:33.762640       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [303e5cbe1c98eba4a68058ea2e32a00cf43f2a9a95f999a8979fc1b9c1e2d5ed] <==
	I1101 10:43:33.335786       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:43:33.338099       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:43:33.344332       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:43:33.345488       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:43:33.345564       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:43:33.345673       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:43:33.348819       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:43:33.359560       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:43:33.360371       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:43:33.360405       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:43:33.360423       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:43:33.360463       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:43:33.360520       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:43:33.360410       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:43:33.360745       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:43:33.363560       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:43:33.364878       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:43:33.364958       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:43:33.367162       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:43:33.368546       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:43:33.370825       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:43:33.371944       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:43:33.374064       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:43:33.376000       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:43:33.382090       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8b2f249be57c07c0fa1ccb42aff365050a82c031fceea126b5fcf5b78e77eb6a] <==
	I1101 10:43:31.108195       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:43:31.185129       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:43:31.285787       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:43:31.285832       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 10:43:31.285930       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:43:31.303205       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:43:31.303263       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:43:31.308028       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:43:31.308441       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:43:31.308477       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:43:31.310726       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:43:31.310769       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:43:31.310830       1 config.go:200] "Starting service config controller"
	I1101 10:43:31.310858       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:43:31.310830       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:43:31.310895       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:43:31.310947       1 config.go:309] "Starting node config controller"
	I1101 10:43:31.310969       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:43:31.310977       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:43:31.411955       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:43:31.412006       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:43:31.411998       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b6e317eecde60778bc7ea3d748bfc59c8cc1f778c663d1b00a08818d50a539f2] <==
	I1101 10:43:28.883789       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:43:29.906091       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:43:29.906133       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:43:29.906146       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:43:29.906155       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:43:29.932806       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:43:29.932904       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:43:29.935781       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:43:29.935885       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:43:29.936171       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:43:29.936206       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:43:30.037633       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.013101     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a6496ad-eaf7-4f96-af7e-0dd5f88346c3-lib-modules\") pod \"kube-proxy-z65pd\" (UID: \"5a6496ad-eaf7-4f96-af7e-0dd5f88346c3\") " pod="kube-system/kube-proxy-z65pd"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.013159     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e62d231c-e1d5-4e4a-81e1-0be9614e211d-cni-cfg\") pod \"kindnet-6lbk4\" (UID: \"e62d231c-e1d5-4e4a-81e1-0be9614e211d\") " pod="kube-system/kindnet-6lbk4"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.013193     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e62d231c-e1d5-4e4a-81e1-0be9614e211d-lib-modules\") pod \"kindnet-6lbk4\" (UID: \"e62d231c-e1d5-4e4a-81e1-0be9614e211d\") " pod="kube-system/kindnet-6lbk4"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.013325     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a6496ad-eaf7-4f96-af7e-0dd5f88346c3-xtables-lock\") pod \"kube-proxy-z65pd\" (UID: \"5a6496ad-eaf7-4f96-af7e-0dd5f88346c3\") " pod="kube-system/kube-proxy-z65pd"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.013680     662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e62d231c-e1d5-4e4a-81e1-0be9614e211d-xtables-lock\") pod \"kindnet-6lbk4\" (UID: \"e62d231c-e1d5-4e4a-81e1-0be9614e211d\") " pod="kube-system/kindnet-6lbk4"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.015109     662 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.015212     662 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.015243     662 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.016752     662 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.021751     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.021876     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.022130     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: I1101 10:43:30.022204     662 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.034779     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-336923\" already exists" pod="kube-system/kube-controller-manager-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.034808     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-336923\" already exists" pod="kube-system/kube-scheduler-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.034884     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-336923\" already exists" pod="kube-system/kube-apiserver-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.034808     662 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-336923\" already exists" pod="kube-system/etcd-newest-cni-336923"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.281937     662 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-proxy start failed in pod kube-proxy-z65pd_kube-system(5a6496ad-eaf7-4f96-af7e-0dd5f88346c3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.281998     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-z65pd" podUID="5a6496ad-eaf7-4f96-af7e-0dd5f88346c3"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.282344     662 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-6lbk4_kube-system(e62d231c-e1d5-4e4a-81e1-0be9614e211d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Nov 01 10:43:30 newest-cni-336923 kubelet[662]: E1101 10:43:30.283519     662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kindnet-6lbk4" podUID="e62d231c-e1d5-4e4a-81e1-0be9614e211d"
	Nov 01 10:43:32 newest-cni-336923 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:43:32 newest-cni-336923 kubelet[662]: I1101 10:43:32.452312     662 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 10:43:32 newest-cni-336923 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:43:32 newest-cni-336923 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
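Note: the logs above show two conditions consistent with this Pause failure: the node is still NotReady ("no CNI configuration file in /etc/cni/net.d/"), and systemd stopped kubelet.service at 10:43:32, which matches the pause path disabling the kubelet. A minimal sketch for confirming both by hand on the node (assuming the newest-cni-336923 profile is still running; this is an illustration, not part of the test harness):

	# open a shell on the minikube node
	minikube -p newest-cni-336923 ssh
	# an empty directory here is why kubelet reports NetworkReady=false
	ls -la /etc/cni/net.d/
	# kubelet shows as inactive after the pause path runs "systemctl disable --now kubelet"
	sudo systemctl status kubelet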
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-336923 -n newest-cni-336923
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-336923 -n newest-cni-336923: exit status 2 (318.176548ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-336923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-j9pcl storage-provisioner dashboard-metrics-scraper-6ffb444bf9-f454l kubernetes-dashboard-855c9754f9-qdhnj
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-336923 describe pod coredns-66bc5c9577-j9pcl storage-provisioner dashboard-metrics-scraper-6ffb444bf9-f454l kubernetes-dashboard-855c9754f9-qdhnj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-336923 describe pod coredns-66bc5c9577-j9pcl storage-provisioner dashboard-metrics-scraper-6ffb444bf9-f454l kubernetes-dashboard-855c9754f9-qdhnj: exit status 1 (60.959908ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-j9pcl" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-f454l" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-qdhnj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-336923 describe pod coredns-66bc5c9577-j9pcl storage-provisioner dashboard-metrics-scraper-6ffb444bf9-f454l kubernetes-dashboard-855c9754f9-qdhnj: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.53s)
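Note: the NotFound errors in the describe step above are likely a post-mortem artifact rather than the pods disappearing: the describe command is run without a namespace, so kubectl looks in "default" while the listed pods live in kube-system and kubernetes-dashboard. A minimal sketch of reproducing the check by hand (assuming the newest-cni-336923 context still exists; pod names differ per run):

	# list pods in any namespace whose phase is not Running (mirrors helpers_test.go:269)
	kubectl --context newest-cni-336923 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	# describe them with an explicit namespace to avoid the NotFound seen above
	kubectl --context newest-cni-336923 -n kube-system describe pod coredns-66bc5c9577-j9pcl storage-provisioner
	kubectl --context newest-cni-336923 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-f454l kubernetes-dashboard-855c9754f9-qdhnj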

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-433711 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-433711 --alsologtostderr -v=1: exit status 80 (2.426068246s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-433711 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:44:09.773345  389951 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:44:09.773622  389951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:44:09.773633  389951 out.go:374] Setting ErrFile to fd 2...
	I1101 10:44:09.773637  389951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:44:09.773866  389951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:44:09.774084  389951 out.go:368] Setting JSON to false
	I1101 10:44:09.774105  389951 mustload.go:66] Loading cluster: default-k8s-diff-port-433711
	I1101 10:44:09.774442  389951 config.go:182] Loaded profile config "default-k8s-diff-port-433711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:44:09.774840  389951 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-433711 --format={{.State.Status}}
	I1101 10:44:09.791533  389951 host.go:66] Checking if "default-k8s-diff-port-433711" exists ...
	I1101 10:44:09.791749  389951 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:44:09.850146  389951 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-01 10:44:09.839409091 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:44:09.850727  389951 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-433711 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 10:44:09.852720  389951 out.go:179] * Pausing node default-k8s-diff-port-433711 ... 
	I1101 10:44:09.853860  389951 host.go:66] Checking if "default-k8s-diff-port-433711" exists ...
	I1101 10:44:09.854098  389951 ssh_runner.go:195] Run: systemctl --version
	I1101 10:44:09.854134  389951 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-433711
	I1101 10:44:09.870427  389951 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/default-k8s-diff-port-433711/id_rsa Username:docker}
	I1101 10:44:09.968321  389951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:44:09.980400  389951 pause.go:52] kubelet running: true
	I1101 10:44:09.980475  389951 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:44:10.135340  389951 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:44:10.135443  389951 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:44:10.201317  389951 cri.go:89] found id: "fe923ee8c80e5de71484349f6c918f286407bfeb10d8d39f3709e34cce2f633f"
	I1101 10:44:10.201339  389951 cri.go:89] found id: "f480d182aec70d74af5c26d6b5649b3d392235fd29b5f2b0e869a42a8aab1142"
	I1101 10:44:10.201342  389951 cri.go:89] found id: "8bacee2ea78c4e9dab8336d76dde4c00d1dd3fae53ccb6cf5794cb16154e9f2d"
	I1101 10:44:10.201351  389951 cri.go:89] found id: "56f78b23eff04b03a0c09efc93f2ddd6f650e3db549d5fbe24b8463049729188"
	I1101 10:44:10.201354  389951 cri.go:89] found id: "268037ed9250984f4892d07ede3dc1caa15abd0d0ee1e13d165836a8f5d56237"
	I1101 10:44:10.201357  389951 cri.go:89] found id: "a47e2aa79fa21a30c460b676774cdb84b1d8dccc92e263a4ff967b3e351c7284"
	I1101 10:44:10.201359  389951 cri.go:89] found id: "ee4bc7ee409435014537fb2e187082556b0eb41b0a940a43ed6a16f657936a76"
	I1101 10:44:10.201362  389951 cri.go:89] found id: "ba242b116eb3920e232ebbe1eb907d675c7b4d49cc536f95a10a281b6e468a77"
	I1101 10:44:10.201365  389951 cri.go:89] found id: "8083c17bef04a52cbc3835ee9f8f046af5ef91f84b3497be4940886ec319826a"
	I1101 10:44:10.201370  389951 cri.go:89] found id: "ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c"
	I1101 10:44:10.201373  389951 cri.go:89] found id: "2627f9ff573b62e65714ef8ce20547c7a9346b10aa430a1b15470b7601f6ba12"
	I1101 10:44:10.201376  389951 cri.go:89] found id: ""
	I1101 10:44:10.201421  389951 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:44:10.213158  389951 retry.go:31] will retry after 296.852256ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:44:10Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:44:10.510768  389951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:44:10.523610  389951 pause.go:52] kubelet running: false
	I1101 10:44:10.523669  389951 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:44:10.659285  389951 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:44:10.659374  389951 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:44:10.724849  389951 cri.go:89] found id: "fe923ee8c80e5de71484349f6c918f286407bfeb10d8d39f3709e34cce2f633f"
	I1101 10:44:10.724876  389951 cri.go:89] found id: "f480d182aec70d74af5c26d6b5649b3d392235fd29b5f2b0e869a42a8aab1142"
	I1101 10:44:10.724883  389951 cri.go:89] found id: "8bacee2ea78c4e9dab8336d76dde4c00d1dd3fae53ccb6cf5794cb16154e9f2d"
	I1101 10:44:10.724888  389951 cri.go:89] found id: "56f78b23eff04b03a0c09efc93f2ddd6f650e3db549d5fbe24b8463049729188"
	I1101 10:44:10.724892  389951 cri.go:89] found id: "268037ed9250984f4892d07ede3dc1caa15abd0d0ee1e13d165836a8f5d56237"
	I1101 10:44:10.724897  389951 cri.go:89] found id: "a47e2aa79fa21a30c460b676774cdb84b1d8dccc92e263a4ff967b3e351c7284"
	I1101 10:44:10.724901  389951 cri.go:89] found id: "ee4bc7ee409435014537fb2e187082556b0eb41b0a940a43ed6a16f657936a76"
	I1101 10:44:10.724905  389951 cri.go:89] found id: "ba242b116eb3920e232ebbe1eb907d675c7b4d49cc536f95a10a281b6e468a77"
	I1101 10:44:10.724908  389951 cri.go:89] found id: "8083c17bef04a52cbc3835ee9f8f046af5ef91f84b3497be4940886ec319826a"
	I1101 10:44:10.724914  389951 cri.go:89] found id: "ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c"
	I1101 10:44:10.724916  389951 cri.go:89] found id: "2627f9ff573b62e65714ef8ce20547c7a9346b10aa430a1b15470b7601f6ba12"
	I1101 10:44:10.724920  389951 cri.go:89] found id: ""
	I1101 10:44:10.724966  389951 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:44:10.736716  389951 retry.go:31] will retry after 385.173251ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:44:10Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:44:11.122267  389951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:44:11.135295  389951 pause.go:52] kubelet running: false
	I1101 10:44:11.135355  389951 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:44:11.266452  389951 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:44:11.266572  389951 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:44:11.330206  389951 cri.go:89] found id: "fe923ee8c80e5de71484349f6c918f286407bfeb10d8d39f3709e34cce2f633f"
	I1101 10:44:11.330231  389951 cri.go:89] found id: "f480d182aec70d74af5c26d6b5649b3d392235fd29b5f2b0e869a42a8aab1142"
	I1101 10:44:11.330237  389951 cri.go:89] found id: "8bacee2ea78c4e9dab8336d76dde4c00d1dd3fae53ccb6cf5794cb16154e9f2d"
	I1101 10:44:11.330245  389951 cri.go:89] found id: "56f78b23eff04b03a0c09efc93f2ddd6f650e3db549d5fbe24b8463049729188"
	I1101 10:44:11.330249  389951 cri.go:89] found id: "268037ed9250984f4892d07ede3dc1caa15abd0d0ee1e13d165836a8f5d56237"
	I1101 10:44:11.330264  389951 cri.go:89] found id: "a47e2aa79fa21a30c460b676774cdb84b1d8dccc92e263a4ff967b3e351c7284"
	I1101 10:44:11.330268  389951 cri.go:89] found id: "ee4bc7ee409435014537fb2e187082556b0eb41b0a940a43ed6a16f657936a76"
	I1101 10:44:11.330273  389951 cri.go:89] found id: "ba242b116eb3920e232ebbe1eb907d675c7b4d49cc536f95a10a281b6e468a77"
	I1101 10:44:11.330275  389951 cri.go:89] found id: "8083c17bef04a52cbc3835ee9f8f046af5ef91f84b3497be4940886ec319826a"
	I1101 10:44:11.330283  389951 cri.go:89] found id: "ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c"
	I1101 10:44:11.330291  389951 cri.go:89] found id: "2627f9ff573b62e65714ef8ce20547c7a9346b10aa430a1b15470b7601f6ba12"
	I1101 10:44:11.330294  389951 cri.go:89] found id: ""
	I1101 10:44:11.330336  389951 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:44:11.341460  389951 retry.go:31] will retry after 563.197082ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:44:11Z" level=error msg="open /run/runc: no such file or directory"
	I1101 10:44:11.905226  389951 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:44:11.918014  389951 pause.go:52] kubelet running: false
	I1101 10:44:11.918073  389951 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 10:44:12.050369  389951 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 10:44:12.050524  389951 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 10:44:12.117729  389951 cri.go:89] found id: "fe923ee8c80e5de71484349f6c918f286407bfeb10d8d39f3709e34cce2f633f"
	I1101 10:44:12.117750  389951 cri.go:89] found id: "f480d182aec70d74af5c26d6b5649b3d392235fd29b5f2b0e869a42a8aab1142"
	I1101 10:44:12.117756  389951 cri.go:89] found id: "8bacee2ea78c4e9dab8336d76dde4c00d1dd3fae53ccb6cf5794cb16154e9f2d"
	I1101 10:44:12.117760  389951 cri.go:89] found id: "56f78b23eff04b03a0c09efc93f2ddd6f650e3db549d5fbe24b8463049729188"
	I1101 10:44:12.117764  389951 cri.go:89] found id: "268037ed9250984f4892d07ede3dc1caa15abd0d0ee1e13d165836a8f5d56237"
	I1101 10:44:12.117769  389951 cri.go:89] found id: "a47e2aa79fa21a30c460b676774cdb84b1d8dccc92e263a4ff967b3e351c7284"
	I1101 10:44:12.117772  389951 cri.go:89] found id: "ee4bc7ee409435014537fb2e187082556b0eb41b0a940a43ed6a16f657936a76"
	I1101 10:44:12.117776  389951 cri.go:89] found id: "ba242b116eb3920e232ebbe1eb907d675c7b4d49cc536f95a10a281b6e468a77"
	I1101 10:44:12.117779  389951 cri.go:89] found id: "8083c17bef04a52cbc3835ee9f8f046af5ef91f84b3497be4940886ec319826a"
	I1101 10:44:12.117792  389951 cri.go:89] found id: "ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c"
	I1101 10:44:12.117796  389951 cri.go:89] found id: "2627f9ff573b62e65714ef8ce20547c7a9346b10aa430a1b15470b7601f6ba12"
	I1101 10:44:12.117800  389951 cri.go:89] found id: ""
	I1101 10:44:12.117850  389951 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 10:44:12.131730  389951 out.go:203] 
	W1101 10:44:12.132842  389951 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:44:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:44:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 10:44:12.132856  389951 out.go:285] * 
	* 
	W1101 10:44:12.137403  389951 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:44:12.138615  389951 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-433711 --alsologtostderr -v=1 failed: exit status 80
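Note: the exit status 80 (GUEST_PAUSE) above comes from the container-listing step of the pause path: every retry of "sudo runc list -f json" fails with "open /run/runc: no such file or directory", even though crictl still reports the kube-system containers. A minimal sketch for inspecting the same state by hand on the node (assuming the default-k8s-diff-port-433711 profile is still up; container IDs will differ):

	# open a shell on the minikube node
	minikube -p default-k8s-diff-port-433711 ssh
	# what the pause path enumerates via crictl for one of the namespaces it pauses
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the step that fails: runc's default state directory is /run/runc, which is missing here
	sudo runc list -f json
	ls -ld /run/runc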
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-433711
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-433711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2",
	        "Created": "2025-11-01T10:41:33.057243261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 380501,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:43:09.687575431Z",
	            "FinishedAt": "2025-11-01T10:43:08.800507891Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2/hosts",
	        "LogPath": "/var/lib/docker/containers/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2-json.log",
	        "Name": "/default-k8s-diff-port-433711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-433711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-433711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2",
	                "LowerDir": "/var/lib/docker/overlay2/16690dfe0c3846cfaa0757431febd471d4e0256ccbe75ee197cfc5d26c1c1409-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16690dfe0c3846cfaa0757431febd471d4e0256ccbe75ee197cfc5d26c1c1409/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16690dfe0c3846cfaa0757431febd471d4e0256ccbe75ee197cfc5d26c1c1409/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16690dfe0c3846cfaa0757431febd471d4e0256ccbe75ee197cfc5d26c1c1409/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-433711",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-433711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-433711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-433711",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-433711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3481fd5ad4e3d37befbf317d24bb644bf33da0800ae850dbeaded24ffb9ca37a",
	            "SandboxKey": "/var/run/docker/netns/3481fd5ad4e3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-433711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:82:9f:23:67:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0395ef9fed2dfe179301e2f7acf97030a23523642ea4cc41f18d2b39a90a95e0",
	                    "EndpointID": "a72c3a8a190dcfa5cf275eb5a65d366de200e39841b6014c17e5aae3e7cf9c15",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-433711",
	                        "b9f86e35d4b2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
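
For reference, the published host ports recorded in the inspect output above can be read back one at a time with docker's Go-template formatter, the same form the test harness itself uses later in this log. A minimal sketch, assuming the default-k8s-diff-port-433711 container from this run is still present:

    # hypothetical manual check, not part of the test run
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-433711     # 33128 in this run
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-433711   # 33131 in this run
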
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711: exit status 2 (320.783305ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
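
The two post-mortem commands the helpers run next can also be replayed by hand against this profile; a minimal sketch using the exact invocations shown in this report (the non-zero status exit reflects the paused host, which the harness notes "may be ok"):

    # hypothetical manual replay of the post-mortem checks
    out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711
    out/minikube-linux-amd64 -p default-k8s-diff-port-433711 logs -n 25
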
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-433711 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-433711 logs -n 25: (1.03606151s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ old-k8s-version-707467 image list --format=json                                                                                                                                                                                               │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p old-k8s-version-707467 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-433711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-433711 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-336923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-433711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ image   │ embed-certs-071527 image list --format=json                                                                                                                                                                                                   │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ pause   │ -p embed-certs-071527 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ stop    │ -p newest-cni-336923 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p embed-certs-071527                                                                                                                                                                                                                         │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-336923 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p embed-certs-071527                                                                                                                                                                                                                         │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ start   │ -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ image   │ newest-cni-336923 image list --format=json                                                                                                                                                                                                    │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ pause   │ -p newest-cni-336923 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ delete  │ -p newest-cni-336923                                                                                                                                                                                                                          │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p newest-cni-336923                                                                                                                                                                                                                          │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ image   │ default-k8s-diff-port-433711 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ pause   │ -p default-k8s-diff-port-433711 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:43:19
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:43:19.562995  385211 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:43:19.563161  385211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:19.563172  385211 out.go:374] Setting ErrFile to fd 2...
	I1101 10:43:19.563179  385211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:19.563441  385211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:43:19.563922  385211 out.go:368] Setting JSON to false
	I1101 10:43:19.565279  385211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8739,"bootTime":1761985060,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:43:19.565370  385211 start.go:143] virtualization: kvm guest
	I1101 10:43:19.567110  385211 out.go:179] * [newest-cni-336923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:43:19.568689  385211 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:43:19.568720  385211 notify.go:221] Checking for updates...
	I1101 10:43:19.570960  385211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:43:19.572305  385211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:19.573417  385211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:43:19.574730  385211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:43:19.576048  385211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:43:19.577590  385211 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:19.578285  385211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:43:19.605771  385211 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:43:19.605883  385211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:19.665006  385211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 10:43:19.654595853 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:19.665205  385211 docker.go:319] overlay module found
	I1101 10:43:19.666673  385211 out.go:179] * Using the docker driver based on existing profile
	I1101 10:43:19.667653  385211 start.go:309] selected driver: docker
	I1101 10:43:19.667667  385211 start.go:930] validating driver "docker" against &{Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:19.667749  385211 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:43:19.668238  385211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:19.729845  385211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 10:43:19.718798371 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:19.730108  385211 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:43:19.730135  385211 cni.go:84] Creating CNI manager for ""
	I1101 10:43:19.730186  385211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:43:19.730221  385211 start.go:353] cluster config:
	{Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:19.731861  385211 out.go:179] * Starting "newest-cni-336923" primary control-plane node in "newest-cni-336923" cluster
	I1101 10:43:19.732887  385211 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:43:19.733870  385211 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:43:19.734910  385211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:43:19.734977  385211 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:43:19.734992  385211 cache.go:59] Caching tarball of preloaded images
	I1101 10:43:19.735037  385211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:43:19.735072  385211 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:43:19.735085  385211 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:43:19.735216  385211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/config.json ...
	I1101 10:43:19.759288  385211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:43:19.759307  385211 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:43:19.759322  385211 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:43:19.759351  385211 start.go:360] acquireMachinesLock for newest-cni-336923: {Name:mk078b1ded54eaee8a26288c21e4405f07864b1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:43:19.759448  385211 start.go:364] duration metric: took 51.416µs to acquireMachinesLock for "newest-cni-336923"
	I1101 10:43:19.759473  385211 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:43:19.759483  385211 fix.go:54] fixHost starting: 
	I1101 10:43:19.759794  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:19.777831  385211 fix.go:112] recreateIfNeeded on newest-cni-336923: state=Stopped err=<nil>
	W1101 10:43:19.777879  385211 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:43:19.580383  380170 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 10:43:19.585617  380170 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 10:43:19.586638  380170 api_server.go:141] control plane version: v1.34.1
	I1101 10:43:19.586665  380170 api_server.go:131] duration metric: took 506.95003ms to wait for apiserver health ...
	I1101 10:43:19.586676  380170 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:43:19.590035  380170 system_pods.go:59] 8 kube-system pods found
	I1101 10:43:19.590088  380170 system_pods.go:61] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:43:19.590104  380170 system_pods.go:61] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:43:19.590111  380170 system_pods.go:61] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:43:19.590119  380170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:43:19.590131  380170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:43:19.590137  380170 system_pods.go:61] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:43:19.590144  380170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:43:19.590149  380170 system_pods.go:61] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Running
	I1101 10:43:19.590164  380170 system_pods.go:74] duration metric: took 3.480437ms to wait for pod list to return data ...
	I1101 10:43:19.590176  380170 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:43:19.592779  380170 default_sa.go:45] found service account: "default"
	I1101 10:43:19.592800  380170 default_sa.go:55] duration metric: took 2.617606ms for default service account to be created ...
	I1101 10:43:19.592810  380170 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:43:19.595723  380170 system_pods.go:86] 8 kube-system pods found
	I1101 10:43:19.595754  380170 system_pods.go:89] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:43:19.595765  380170 system_pods.go:89] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:43:19.595781  380170 system_pods.go:89] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:43:19.595789  380170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:43:19.595799  380170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:43:19.595805  380170 system_pods.go:89] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:43:19.595813  380170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:43:19.595818  380170 system_pods.go:89] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Running
	I1101 10:43:19.595829  380170 system_pods.go:126] duration metric: took 3.011558ms to wait for k8s-apps to be running ...
	I1101 10:43:19.595837  380170 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:43:19.595885  380170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:19.612808  380170 system_svc.go:56] duration metric: took 16.960672ms WaitForService to wait for kubelet
	I1101 10:43:19.612844  380170 kubeadm.go:587] duration metric: took 2.485063342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:43:19.612867  380170 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:43:19.616298  380170 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:43:19.616333  380170 node_conditions.go:123] node cpu capacity is 8
	I1101 10:43:19.616349  380170 node_conditions.go:105] duration metric: took 3.477231ms to run NodePressure ...
	I1101 10:43:19.616364  380170 start.go:242] waiting for startup goroutines ...
	I1101 10:43:19.616379  380170 start.go:247] waiting for cluster config update ...
	I1101 10:43:19.616401  380170 start.go:256] writing updated cluster config ...
	I1101 10:43:19.616752  380170 ssh_runner.go:195] Run: rm -f paused
	I1101 10:43:19.620456  380170 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:43:19.623291  380170 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v7tvt" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:43:21.628140  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:23.630013  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	I1101 10:43:19.779427  385211 out.go:252] * Restarting existing docker container for "newest-cni-336923" ...
	I1101 10:43:19.779489  385211 cli_runner.go:164] Run: docker start newest-cni-336923
	I1101 10:43:20.014386  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:20.033355  385211 kic.go:430] container "newest-cni-336923" state is running.
	I1101 10:43:20.033776  385211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-336923
	I1101 10:43:20.051719  385211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/config.json ...
	I1101 10:43:20.051923  385211 machine.go:94] provisionDockerMachine start ...
	I1101 10:43:20.051985  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:20.069646  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:20.069891  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:20.069906  385211 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:43:20.070476  385211 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58296->127.0.0.1:33133: read: connection reset by peer
	I1101 10:43:23.216448  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-336923
	
	I1101 10:43:23.216483  385211 ubuntu.go:182] provisioning hostname "newest-cni-336923"
	I1101 10:43:23.216574  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:23.239604  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:23.240021  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:23.240050  385211 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-336923 && echo "newest-cni-336923" | sudo tee /etc/hostname
	I1101 10:43:23.406412  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-336923
	
	I1101 10:43:23.406490  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:23.430458  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:23.430817  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:23.430849  385211 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-336923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-336923/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-336923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:43:23.584527  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:43:23.584561  385211 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:43:23.584586  385211 ubuntu.go:190] setting up certificates
	I1101 10:43:23.584599  385211 provision.go:84] configureAuth start
	I1101 10:43:23.584671  385211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-336923
	I1101 10:43:23.606864  385211 provision.go:143] copyHostCerts
	I1101 10:43:23.606939  385211 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:43:23.606959  385211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:43:23.607044  385211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:43:23.607184  385211 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:43:23.607198  385211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:43:23.607244  385211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:43:23.607352  385211 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:43:23.607365  385211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:43:23.607400  385211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:43:23.607554  385211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.newest-cni-336923 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-336923]
	I1101 10:43:24.105760  385211 provision.go:177] copyRemoteCerts
	I1101 10:43:24.105843  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:43:24.105901  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.123234  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.223265  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:43:24.240358  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:43:24.257152  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:43:24.273645  385211 provision.go:87] duration metric: took 689.027992ms to configureAuth
	I1101 10:43:24.273673  385211 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:43:24.273876  385211 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:24.274012  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.291114  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:24.291345  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:24.291367  385211 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:43:24.560882  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:43:24.560915  385211 machine.go:97] duration metric: took 4.508974654s to provisionDockerMachine
	I1101 10:43:24.560932  385211 start.go:293] postStartSetup for "newest-cni-336923" (driver="docker")
	I1101 10:43:24.560965  385211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:43:24.561042  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:43:24.561104  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.581756  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.682079  385211 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:43:24.685513  385211 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:43:24.685538  385211 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:43:24.685552  385211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 10:43:24.685593  385211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 10:43:24.685674  385211 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem -> 615222.pem in /etc/ssl/certs
	I1101 10:43:24.685761  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:43:24.693293  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:43:24.710868  385211 start.go:296] duration metric: took 149.921905ms for postStartSetup
	I1101 10:43:24.710959  385211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:43:24.711009  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.727702  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.823431  385211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:43:24.828000  385211 fix.go:56] duration metric: took 5.068504403s for fixHost
	I1101 10:43:24.828024  385211 start.go:83] releasing machines lock for "newest-cni-336923", held for 5.068561902s
	I1101 10:43:24.828091  385211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-336923
	I1101 10:43:24.845157  385211 ssh_runner.go:195] Run: cat /version.json
	I1101 10:43:24.845213  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.845273  385211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:43:24.845342  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.863014  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.863284  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:25.013866  385211 ssh_runner.go:195] Run: systemctl --version
	I1101 10:43:25.020582  385211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:43:25.057023  385211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:43:25.062007  385211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:43:25.062060  385211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:43:25.070026  385211 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:43:25.070050  385211 start.go:496] detecting cgroup driver to use...
	I1101 10:43:25.070082  385211 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:43:25.070139  385211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:43:25.085382  385211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:43:25.098030  385211 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:43:25.098075  385211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:43:25.111846  385211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:43:25.123714  385211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:43:25.203249  385211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:43:25.286193  385211 docker.go:234] disabling docker service ...
	I1101 10:43:25.286274  385211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:43:25.300278  385211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:43:25.312521  385211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:43:25.424819  385211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:43:25.535913  385211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:43:25.552035  385211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:43:25.570081  385211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:43:25.570141  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.581526  385211 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:43:25.581590  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.592157  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.602648  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.613285  385211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:43:25.623297  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.634452  385211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.644745  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.654826  385211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:43:25.663288  385211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:43:25.672545  385211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:43:25.774692  385211 ssh_runner.go:195] Run: sudo systemctl restart crio
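The sed edits above all target the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before the restart. A minimal sketch of the settings that drop-in should end up containing (the values are exactly those in the logged commands; the TOML section headers are an assumption about the base image's drop-in layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]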
	I1101 10:43:26.389931  385211 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:43:26.390002  385211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:43:26.395453  385211 start.go:564] Will wait 60s for crictl version
	I1101 10:43:26.395532  385211 ssh_runner.go:195] Run: which crictl
	I1101 10:43:26.400212  385211 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:43:26.432448  385211 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:43:26.432553  385211 ssh_runner.go:195] Run: crio --version
	I1101 10:43:26.469531  385211 ssh_runner.go:195] Run: crio --version
	I1101 10:43:26.509599  385211 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:43:26.510918  385211 cli_runner.go:164] Run: docker network inspect newest-cni-336923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:43:26.532517  385211 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:43:26.537439  385211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:43:26.551090  385211 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:43:26.552134  385211 kubeadm.go:884] updating cluster {Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:43:26.552309  385211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:43:26.552371  385211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:43:26.592302  385211 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:43:26.592326  385211 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:43:26.592385  385211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:43:26.623998  385211 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:43:26.624025  385211 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:43:26.624035  385211 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:43:26.624170  385211 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-336923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
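In the kubelet unit fragment above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service before the next line sets the minikube-specific command. The fragment is written to the node a few steps later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the merged unit could be inspected on the node with, for example:

    # sketch only: show the base unit plus all drop-ins as systemd merges them
    systemctl cat kubelet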
	I1101 10:43:26.624265  385211 ssh_runner.go:195] Run: crio config
	I1101 10:43:26.674400  385211 cni.go:84] Creating CNI manager for ""
	I1101 10:43:26.674422  385211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:43:26.674440  385211 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:43:26.674462  385211 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-336923 NodeName:newest-cni-336923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:43:26.674609  385211 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-336923"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:43:26.674672  385211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:43:26.684472  385211 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:43:26.684555  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:43:26.693298  385211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:43:26.708330  385211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:43:26.723417  385211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
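The config written to /var/tmp/minikube/kubeadm.yaml.new above is a multi-document kubeadm file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). A hedged sketch of how one could sanity-check it by hand on the node, assuming kubeadm sits alongside the other binaries in the versioned directory the log shows:

    # Sketch only, not part of the test run; --dry-run applies nothing to the node.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --dry-run \
      --config /var/tmp/minikube/kubeadm.yaml.new
    # recent kubeadm releases also offer: kubeadm config validate --config <file>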
	I1101 10:43:26.738609  385211 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:43:26.743102  385211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:43:26.754490  385211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:43:26.860261  385211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:43:26.888382  385211 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923 for IP: 192.168.85.2
	I1101 10:43:26.888407  385211 certs.go:195] generating shared ca certs ...
	I1101 10:43:26.888429  385211 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:26.888637  385211 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 10:43:26.888701  385211 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 10:43:26.888718  385211 certs.go:257] generating profile certs ...
	I1101 10:43:26.888850  385211 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/client.key
	I1101 10:43:26.888933  385211 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/apiserver.key.243c0d0d
	I1101 10:43:26.888995  385211 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/proxy-client.key
	I1101 10:43:26.889152  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem (1338 bytes)
	W1101 10:43:26.889197  385211 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522_empty.pem, impossibly tiny 0 bytes
	I1101 10:43:26.889212  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:43:26.889244  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:43:26.889284  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:43:26.889316  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 10:43:26.889372  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:43:26.890238  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:43:26.915760  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:43:26.940726  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:43:26.964835  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:43:26.990573  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:43:27.013067  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:43:27.036519  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:43:27.059066  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:43:27.081056  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /usr/share/ca-certificates/615222.pem (1708 bytes)
	I1101 10:43:27.103980  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:43:27.126512  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem --> /usr/share/ca-certificates/61522.pem (1338 bytes)
	I1101 10:43:27.149504  385211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:43:27.165265  385211 ssh_runner.go:195] Run: openssl version
	I1101 10:43:27.172420  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/61522.pem && ln -fs /usr/share/ca-certificates/61522.pem /etc/ssl/certs/61522.pem"
	I1101 10:43:27.183610  385211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/61522.pem
	I1101 10:43:27.188666  385211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:01 /usr/share/ca-certificates/61522.pem
	I1101 10:43:27.188723  385211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/61522.pem
	I1101 10:43:27.245581  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/61522.pem /etc/ssl/certs/51391683.0"
	I1101 10:43:27.256943  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/615222.pem && ln -fs /usr/share/ca-certificates/615222.pem /etc/ssl/certs/615222.pem"
	I1101 10:43:27.268346  385211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/615222.pem
	I1101 10:43:27.273383  385211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:01 /usr/share/ca-certificates/615222.pem
	I1101 10:43:27.273441  385211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/615222.pem
	I1101 10:43:27.329279  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/615222.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:43:27.340693  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:43:27.351249  385211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:43:27.355345  385211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:43:27.355403  385211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:43:27.414180  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
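The ls / openssl x509 -hash / ln sequence above implements OpenSSL's hashed-directory lookup convention: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so TLS clients can find it. A sketch of the pattern for the minikubeCA case (hash value as computed in this run):

    # compute the subject hash, then create the lookup symlink
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"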
	I1101 10:43:27.426101  385211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:43:27.430861  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:43:27.486012  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:43:27.563512  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:43:27.622833  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:43:27.682140  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:43:27.737630  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
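Each openssl x509 -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, non-zero means it expires within that window. Sketch of the semantics:

    # exit 0 = still valid in 24h; exit 1 = expires (or is already expired) within 24h
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h"
    fi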
	I1101 10:43:27.793353  385211 kubeadm.go:401] StartCluster: {Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:27.793475  385211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:43:27.793563  385211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:43:27.831673  385211 cri.go:89] found id: ""
	I1101 10:43:27.831737  385211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:43:27.840098  385211 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:43:27.840120  385211 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:43:27.840169  385211 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:43:27.847934  385211 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:43:27.848632  385211 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-336923" does not appear in /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:27.848984  385211 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-58021/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-336923" cluster setting kubeconfig missing "newest-cni-336923" context setting]
	I1101 10:43:27.849613  385211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:27.876052  385211 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:43:27.884631  385211 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:43:27.884663  385211 kubeadm.go:602] duration metric: took 44.535917ms to restartPrimaryControlPlane
	I1101 10:43:27.884674  385211 kubeadm.go:403] duration metric: took 91.333695ms to StartCluster
	I1101 10:43:27.884693  385211 settings.go:142] acquiring lock: {Name:mka443f0ac99a59b23190497686b8296dc73358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:27.884762  385211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:27.885777  385211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:27.921113  385211 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:43:27.921231  385211 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:43:27.921378  385211 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-336923"
	I1101 10:43:27.921390  385211 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:27.921404  385211 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-336923"
	W1101 10:43:27.921414  385211 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:43:27.921408  385211 addons.go:70] Setting dashboard=true in profile "newest-cni-336923"
	I1101 10:43:27.921422  385211 addons.go:70] Setting default-storageclass=true in profile "newest-cni-336923"
	I1101 10:43:27.921443  385211 addons.go:239] Setting addon dashboard=true in "newest-cni-336923"
	I1101 10:43:27.921448  385211 host.go:66] Checking if "newest-cni-336923" exists ...
	W1101 10:43:27.921455  385211 addons.go:248] addon dashboard should already be in state true
	I1101 10:43:27.921458  385211 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-336923"
	I1101 10:43:27.921512  385211 host.go:66] Checking if "newest-cni-336923" exists ...
	I1101 10:43:27.921788  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.921874  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.921878  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.922943  385211 out.go:179] * Verifying Kubernetes components...
	I1101 10:43:27.926542  385211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:43:27.948055  385211 addons.go:239] Setting addon default-storageclass=true in "newest-cni-336923"
	W1101 10:43:27.948083  385211 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:43:27.948115  385211 host.go:66] Checking if "newest-cni-336923" exists ...
	I1101 10:43:27.948592  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.949931  385211 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:43:27.951582  385211 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:43:27.951591  385211 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:43:27.951717  385211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:43:27.951785  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:27.952894  385211 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1101 10:43:25.630441  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:28.132297  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	I1101 10:43:27.954213  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:43:27.954233  385211 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:43:27.954294  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:27.975475  385211 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:43:27.975514  385211 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:43:27.975584  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:27.976243  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:27.982565  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:28.003386  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:28.069464  385211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:43:28.097817  385211 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:43:28.097875  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:43:28.097884  385211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:43:28.099383  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:43:28.099403  385211 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:43:28.122313  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:43:28.122561  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:43:28.122586  385211 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:43:28.149303  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:43:28.149330  385211 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:43:28.174173  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:43:28.174199  385211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:43:28.195857  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:43:28.195884  385211 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1101 10:43:28.201388  385211 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:43:28.201435  385211 retry.go:31] will retry after 259.751612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:43:28.218472  385211 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:43:28.218525  385211 retry.go:31] will retry after 370.922823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:43:28.219522  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:43:28.219550  385211 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:43:28.237475  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:43:28.237512  385211 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:43:28.253898  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:43:28.253926  385211 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:43:28.267526  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:43:28.267552  385211 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:43:28.280403  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:43:28.462006  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:43:28.590605  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:43:28.598216  385211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:43:30.341998  385211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.061546929s)
	I1101 10:43:30.343194  385211 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-336923 addons enable metrics-server
	
	I1101 10:43:30.427213  385211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.96517409s)
	I1101 10:43:30.427275  385211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.836635399s)
	I1101 10:43:30.427306  385211 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.829057914s)
	I1101 10:43:30.427330  385211 api_server.go:72] duration metric: took 2.506172151s to wait for apiserver process to appear ...
	I1101 10:43:30.427336  385211 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:43:30.427357  385211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:43:30.434031  385211 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:43:30.434053  385211 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:43:30.442027  385211 out.go:179] * Enabled addons: dashboard, storage-provisioner, default-storageclass
	I1101 10:43:30.443021  385211 addons.go:515] duration metric: took 2.521798237s for enable addons: enabled=[dashboard storage-provisioner default-storageclass]
	I1101 10:43:30.928254  385211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:43:30.933188  385211 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:43:30.933224  385211 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:43:31.427738  385211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:43:31.432031  385211 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:43:31.433126  385211 api_server.go:141] control plane version: v1.34.1
	I1101 10:43:31.433156  385211 api_server.go:131] duration metric: took 1.005812081s to wait for apiserver health ...
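The polling above hits the apiserver's /healthz endpoint; the [+]/[-] per-check breakdown is what the endpoint returns while a check is failing (here the rbac and scheduling post-start hooks had not finished yet). To probe the same endpoint by hand, a hedged sketch, assuming the default RBAC that exposes health endpoints to unauthenticated callers:

    # -k skips TLS verification; ?verbose forces the per-check breakdown even on success
    curl -k "https://192.168.85.2:8443/healthz?verbose"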
	I1101 10:43:31.433168  385211 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:43:31.436835  385211 system_pods.go:59] 8 kube-system pods found
	I1101 10:43:31.436864  385211 system_pods.go:61] "coredns-66bc5c9577-j9pcl" [9244c7b5-e2f4-44ec-a7c9-f337e044f46e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:43:31.436872  385211 system_pods.go:61] "etcd-newest-cni-336923" [e4c9b0a5-3bfb-4e36-bc6e-fcfe9945c1f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:43:31.436882  385211 system_pods.go:61] "kindnet-6lbk4" [e62d231c-e1d5-4e4a-81e1-0be9614e211d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:43:31.436890  385211 system_pods.go:61] "kube-apiserver-newest-cni-336923" [f7c5c26f-4f73-459f-b72a-79f07879ab50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:43:31.436897  385211 system_pods.go:61] "kube-controller-manager-newest-cni-336923" [4d758565-1733-499f-ad35-853e88c03a13] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:43:31.436903  385211 system_pods.go:61] "kube-proxy-z65pd" [5a6496ad-eaf7-4f96-af7e-0dd5f88346c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:43:31.436910  385211 system_pods.go:61] "kube-scheduler-newest-cni-336923" [03d3cde4-6638-4fe6-949a-26f05cd8dfac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:43:31.436915  385211 system_pods.go:61] "storage-provisioner" [7165902e-833a-41e9-84eb-cf31f057f373] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:43:31.436924  385211 system_pods.go:74] duration metric: took 3.751261ms to wait for pod list to return data ...
	I1101 10:43:31.436933  385211 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:43:31.439538  385211 default_sa.go:45] found service account: "default"
	I1101 10:43:31.439560  385211 default_sa.go:55] duration metric: took 2.618436ms for default service account to be created ...
	I1101 10:43:31.439574  385211 kubeadm.go:587] duration metric: took 3.518414216s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:43:31.439596  385211 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:43:31.442059  385211 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:43:31.442085  385211 node_conditions.go:123] node cpu capacity is 8
	I1101 10:43:31.442098  385211 node_conditions.go:105] duration metric: took 2.496441ms to run NodePressure ...
	I1101 10:43:31.442113  385211 start.go:242] waiting for startup goroutines ...
	I1101 10:43:31.442127  385211 start.go:247] waiting for cluster config update ...
	I1101 10:43:31.442144  385211 start.go:256] writing updated cluster config ...
	I1101 10:43:31.442423  385211 ssh_runner.go:195] Run: rm -f paused
	I1101 10:43:31.493548  385211 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:43:31.495480  385211 out.go:179] * Done! kubectl is now configured to use "newest-cni-336923" cluster and "default" namespace by default
	W1101 10:43:30.628520  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:32.629114  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:35.128812  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:37.629055  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:40.128800  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:42.628540  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:44.629264  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:47.129278  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:49.628739  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:52.129160  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:54.628303  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	I1101 10:43:56.628389  380170 pod_ready.go:94] pod "coredns-66bc5c9577-v7tvt" is "Ready"
	I1101 10:43:56.628417  380170 pod_ready.go:86] duration metric: took 37.005101259s for pod "coredns-66bc5c9577-v7tvt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:56.630867  380170 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:56.634391  380170 pod_ready.go:94] pod "etcd-default-k8s-diff-port-433711" is "Ready"
	I1101 10:43:56.634415  380170 pod_ready.go:86] duration metric: took 3.522298ms for pod "etcd-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:56.636115  380170 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:56.639521  380170 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-433711" is "Ready"
	I1101 10:43:56.639545  380170 pod_ready.go:86] duration metric: took 3.405718ms for pod "kube-apiserver-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:56.641320  380170 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:56.826879  380170 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-433711" is "Ready"
	I1101 10:43:56.826906  380170 pod_ready.go:86] duration metric: took 185.570131ms for pod "kube-controller-manager-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:57.026875  380170 pod_ready.go:83] waiting for pod "kube-proxy-2g94q" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:57.426934  380170 pod_ready.go:94] pod "kube-proxy-2g94q" is "Ready"
	I1101 10:43:57.426962  380170 pod_ready.go:86] duration metric: took 400.060114ms for pod "kube-proxy-2g94q" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:57.627021  380170 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:58.027010  380170 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-433711" is "Ready"
	I1101 10:43:58.027037  380170 pod_ready.go:86] duration metric: took 399.991909ms for pod "kube-scheduler-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:58.027049  380170 pod_ready.go:40] duration metric: took 38.406562117s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:43:58.070315  380170 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:43:58.071931  380170 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-433711" cluster and "default" namespace by default
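The pod_ready polling interleaved above (process 380170) is the parallel default-k8s-diff-port start waiting for each control-plane pod to report Ready. A rough kubectl equivalent against the same cluster would be (sketch only; the context name follows minikube's profile-name convention and the timeout is illustrative):

    kubectl --context default-k8s-diff-port-433711 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s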
	
	
	==> CRI-O <==
	Nov 01 10:43:29 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:29.282517425Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:43:29 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:29.28629466Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:43:29 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:29.286324741Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.456538395Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4b530148-8fac-4460-a2a9-95ed4780bada name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.457458774Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5663b6b1-98e1-4869-baca-818204472644 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.458475933Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2/dashboard-metrics-scraper" id=701fd135-0f36-4cd1-a79c-c9fbd333bd4b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.458628347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.464239198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.464700481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.493710665Z" level=info msg="Created container ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2/dashboard-metrics-scraper" id=701fd135-0f36-4cd1-a79c-c9fbd333bd4b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.494205208Z" level=info msg="Starting container: ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c" id=ddf2ac11-c8ea-4bd2-80de-57d0237d6dcd name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.495939132Z" level=info msg="Started container" PID=1751 containerID=ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2/dashboard-metrics-scraper id=ddf2ac11-c8ea-4bd2-80de-57d0237d6dcd name=/runtime.v1.RuntimeService/StartContainer sandboxID=87a33bbb38ddef292d98ce33d11c877d2a45f773399f294f959e3e785a51b92f
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.561566546Z" level=info msg="Removing container: a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698" id=9496230b-d72b-41ec-973a-d9d86f659852 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.57023854Z" level=info msg="Removed container a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2/dashboard-metrics-scraper" id=9496230b-d72b-41ec-973a-d9d86f659852 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.571180042Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=08a7cbe2-cfe1-42ee-b76d-6d4d2fbf42da name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.572182335Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2b1ecfb3-6178-4859-9b57-0c7e8c869efd name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.573280652Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=07018ef5-fda7-4ed8-8428-928b6e0ea4b4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.573417153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.578363314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.578491153Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1c51413804b9c85cd9f4509b1597fc4abe4a2f91694b8cefcac2151c0d08ce68/merged/etc/passwd: no such file or directory"
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.578525543Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1c51413804b9c85cd9f4509b1597fc4abe4a2f91694b8cefcac2151c0d08ce68/merged/etc/group: no such file or directory"
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.578773673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.604950061Z" level=info msg="Created container fe923ee8c80e5de71484349f6c918f286407bfeb10d8d39f3709e34cce2f633f: kube-system/storage-provisioner/storage-provisioner" id=07018ef5-fda7-4ed8-8428-928b6e0ea4b4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.605509473Z" level=info msg="Starting container: fe923ee8c80e5de71484349f6c918f286407bfeb10d8d39f3709e34cce2f633f" id=32305199-fedb-4c4e-a646-157a9a8624b2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.607402679Z" level=info msg="Started container" PID=1765 containerID=fe923ee8c80e5de71484349f6c918f286407bfeb10d8d39f3709e34cce2f633f description=kube-system/storage-provisioner/storage-provisioner id=32305199-fedb-4c4e-a646-157a9a8624b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ae2c68ac30995f3ea9cd808bc8865f9030055807d5937530f59ead5dfcbe53b6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	fe923ee8c80e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   ae2c68ac30995       storage-provisioner                                    kube-system
	ff771c6cd6560       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   87a33bbb38dde       dashboard-metrics-scraper-6ffb444bf9-wrzq2             kubernetes-dashboard
	2627f9ff573b6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   5db16fe909cbb       kubernetes-dashboard-855c9754f9-fbhvp                  kubernetes-dashboard
	20518f3b36581       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   4c74bc691ead4       busybox                                                default
	f480d182aec70       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   83aa98f2cf102       coredns-66bc5c9577-v7tvt                               kube-system
	8bacee2ea78c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   ae2c68ac30995       storage-provisioner                                    kube-system
	56f78b23eff04       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   f216498248b7c       kube-proxy-2g94q                                       kube-system
	268037ed92509       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   b28c9886fe3eb       kindnet-f2zwl                                          kube-system
	a47e2aa79fa21       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   2650bf242359a       kube-apiserver-default-k8s-diff-port-433711            kube-system
	ee4bc7ee40943       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   2b0eefdc3fc56       kube-scheduler-default-k8s-diff-port-433711            kube-system
	ba242b116eb39       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   7e9788e3ad063       kube-controller-manager-default-k8s-diff-port-433711   kube-system
	8083c17bef04a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   78068b36711d0       etcd-default-k8s-diff-port-433711                      kube-system
	
	
	==> coredns [f480d182aec70d74af5c26d6b5649b3d392235fd29b5f2b0e869a42a8aab1142] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50940 - 29971 "HINFO IN 7321720219376195779.8126889959094793254. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031465491s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-433711
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-433711
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=default-k8s-diff-port-433711
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_41_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:41:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-433711
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:44:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:44:09 +0000   Sat, 01 Nov 2025 10:41:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:44:09 +0000   Sat, 01 Nov 2025 10:41:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:44:09 +0000   Sat, 01 Nov 2025 10:41:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:44:09 +0000   Sat, 01 Nov 2025 10:42:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-433711
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e1d5f657-b6a1-42bf-b6a8-18744a9a0476
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-v7tvt                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m17s
	  kube-system                 etcd-default-k8s-diff-port-433711                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m24s
	  kube-system                 kindnet-f2zwl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-433711             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-433711    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-2g94q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-default-k8s-diff-port-433711             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wrzq2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fbhvp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  Starting                 2m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m27s (x8 over 2m27s)  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s (x8 over 2m27s)  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s (x8 over 2m27s)  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m23s                  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m23s                  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m23s                  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m18s                  node-controller  Node default-k8s-diff-port-433711 event: Registered Node default-k8s-diff-port-433711 in Controller
	  Normal  NodeReady                96s                    kubelet          Node default-k8s-diff-port-433711 status is now: NodeReady
	  Normal  Starting                 57s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)      kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)      kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)      kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                    node-controller  Node default-k8s-diff-port-433711 event: Registered Node default-k8s-diff-port-433711 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	[Nov 1 10:42] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000013] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [8083c17bef04a52cbc3835ee9f8f046af5ef91f84b3497be4940886ec319826a] <==
	{"level":"warn","ts":"2025-11-01T10:43:17.571156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.577626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.597572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.604807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.618241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.624450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.632519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.639218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.646622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.653757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.661127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.667855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.674791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.682823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.689904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.696808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.704892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.712537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.720141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.734697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.738103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.747029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.753805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.815577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54516","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:43:26.269986Z","caller":"traceutil/trace.go:172","msg":"trace[276127190] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"114.126473ms","start":"2025-11-01T10:43:26.155836Z","end":"2025-11-01T10:43:26.269962Z","steps":["trace[276127190] 'process raft request'  (duration: 110.940695ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:44:13 up  2:26,  0 user,  load average: 2.79, 3.60, 2.56
	Linux default-k8s-diff-port-433711 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [268037ed9250984f4892d07ede3dc1caa15abd0d0ee1e13d165836a8f5d56237] <==
	I1101 10:43:19.058140       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:43:19.060022       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:43:19.060290       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:43:19.060313       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:43:19.060342       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:43:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:43:19.263404       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:43:19.263443       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:43:19.263455       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:43:19.263589       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:43:19.663921       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:43:19.663945       1 metrics.go:72] Registering metrics
	I1101 10:43:19.664004       1 controller.go:711] "Syncing nftables rules"
	I1101 10:43:29.263754       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:43:29.263820       1 main.go:301] handling current node
	I1101 10:43:39.263764       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:43:39.263795       1 main.go:301] handling current node
	I1101 10:43:49.263201       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:43:49.263252       1 main.go:301] handling current node
	I1101 10:43:59.263162       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:43:59.263213       1 main.go:301] handling current node
	I1101 10:44:09.269395       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:44:09.269439       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a47e2aa79fa21a30c460b676774cdb84b1d8dccc92e263a4ff967b3e351c7284] <==
	I1101 10:43:18.353084       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:43:18.353095       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:43:18.353239       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:43:18.353248       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:43:18.353548       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:43:18.353653       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:43:18.353663       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:43:18.353788       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:43:18.353930       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:43:18.353983       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:43:18.354465       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:43:18.360295       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 10:43:18.366910       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:43:18.388893       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:43:18.502312       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:43:18.801316       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:43:18.891296       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:43:18.917948       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:43:18.934545       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:43:18.990085       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.97.177"}
	I1101 10:43:19.005969       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.249.188"}
	I1101 10:43:19.257041       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:43:21.433229       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:43:21.484364       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:43:21.582993       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ba242b116eb3920e232ebbe1eb907d675c7b4d49cc536f95a10a281b6e468a77] <==
	I1101 10:43:21.060482       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:43:21.060541       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:43:21.060551       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:43:21.060556       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:43:21.062803       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:43:21.063969       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:43:21.071246       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:43:21.079645       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:43:21.079688       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:43:21.079724       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:43:21.079771       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:43:21.079895       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:43:21.079935       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:43:21.079978       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-433711"
	I1101 10:43:21.080027       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:43:21.080164       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:43:21.080256       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:43:21.080342       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:43:21.080863       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:43:21.080887       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:43:21.082146       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:43:21.082198       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:43:21.086173       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:43:21.088669       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:43:21.098952       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [56f78b23eff04b03a0c09efc93f2ddd6f650e3db549d5fbe24b8463049729188] <==
	I1101 10:43:18.925611       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:43:19.015383       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:43:19.116425       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:43:19.116456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:43:19.116542       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:43:19.139090       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:43:19.139149       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:43:19.144744       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:43:19.145145       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:43:19.145187       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:43:19.146954       1 config.go:200] "Starting service config controller"
	I1101 10:43:19.146981       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:43:19.147034       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:43:19.147047       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:43:19.147066       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:43:19.147074       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:43:19.147102       1 config.go:309] "Starting node config controller"
	I1101 10:43:19.147111       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:43:19.247195       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:43:19.247196       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:43:19.247238       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:43:19.247305       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ee4bc7ee409435014537fb2e187082556b0eb41b0a940a43ed6a16f657936a76] <==
	I1101 10:43:17.512334       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:43:19.601001       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:43:19.601102       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:43:19.606182       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:43:19.606321       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:43:19.606284       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:43:19.606479       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:43:19.606258       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:43:19.606797       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:43:19.606681       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:43:19.606703       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:43:19.706778       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:43:19.706889       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:43:19.707045       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:43:21 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:21.817520     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fd3ea554-304d-4143-ab2e-461ce7d2077c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-fbhvp\" (UID: \"fd3ea554-304d-4143-ab2e-461ce7d2077c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbhvp"
	Nov 01 10:43:21 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:21.817617     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f54xb\" (UniqueName: \"kubernetes.io/projected/fd3ea554-304d-4143-ab2e-461ce7d2077c-kube-api-access-f54xb\") pod \"kubernetes-dashboard-855c9754f9-fbhvp\" (UID: \"fd3ea554-304d-4143-ab2e-461ce7d2077c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbhvp"
	Nov 01 10:43:24 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:24.499662     724 scope.go:117] "RemoveContainer" containerID="20f26770d6752c6319d3409fcf7d94ab1abe42f74db1d975151fab98527fa443"
	Nov 01 10:43:25 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:25.504939     724 scope.go:117] "RemoveContainer" containerID="20f26770d6752c6319d3409fcf7d94ab1abe42f74db1d975151fab98527fa443"
	Nov 01 10:43:25 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:25.505102     724 scope.go:117] "RemoveContainer" containerID="a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698"
	Nov 01 10:43:25 default-k8s-diff-port-433711 kubelet[724]: E1101 10:43:25.505300     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrzq2_kubernetes-dashboard(fc8d209f-d810-4662-8f93-fa4bbb2f139f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2" podUID="fc8d209f-d810-4662-8f93-fa4bbb2f139f"
	Nov 01 10:43:26 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:26.400555     724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:43:26 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:26.511042     724 scope.go:117] "RemoveContainer" containerID="a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698"
	Nov 01 10:43:26 default-k8s-diff-port-433711 kubelet[724]: E1101 10:43:26.511204     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrzq2_kubernetes-dashboard(fc8d209f-d810-4662-8f93-fa4bbb2f139f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2" podUID="fc8d209f-d810-4662-8f93-fa4bbb2f139f"
	Nov 01 10:43:28 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:28.529517     724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbhvp" podStartSLOduration=1.533801211 podStartE2EDuration="7.529479107s" podCreationTimestamp="2025-11-01 10:43:21 +0000 UTC" firstStartedPulling="2025-11-01 10:43:22.034019653 +0000 UTC m=+5.670022799" lastFinishedPulling="2025-11-01 10:43:28.029697546 +0000 UTC m=+11.665700695" observedRunningTime="2025-11-01 10:43:28.529477327 +0000 UTC m=+12.165480495" watchObservedRunningTime="2025-11-01 10:43:28.529479107 +0000 UTC m=+12.165482275"
	Nov 01 10:43:32 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:32.529748     724 scope.go:117] "RemoveContainer" containerID="a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698"
	Nov 01 10:43:32 default-k8s-diff-port-433711 kubelet[724]: E1101 10:43:32.529915     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrzq2_kubernetes-dashboard(fc8d209f-d810-4662-8f93-fa4bbb2f139f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2" podUID="fc8d209f-d810-4662-8f93-fa4bbb2f139f"
	Nov 01 10:43:46 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:46.456055     724 scope.go:117] "RemoveContainer" containerID="a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698"
	Nov 01 10:43:46 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:46.560274     724 scope.go:117] "RemoveContainer" containerID="a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698"
	Nov 01 10:43:46 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:46.560523     724 scope.go:117] "RemoveContainer" containerID="ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c"
	Nov 01 10:43:46 default-k8s-diff-port-433711 kubelet[724]: E1101 10:43:46.560728     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrzq2_kubernetes-dashboard(fc8d209f-d810-4662-8f93-fa4bbb2f139f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2" podUID="fc8d209f-d810-4662-8f93-fa4bbb2f139f"
	Nov 01 10:43:49 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:49.570773     724 scope.go:117] "RemoveContainer" containerID="8bacee2ea78c4e9dab8336d76dde4c00d1dd3fae53ccb6cf5794cb16154e9f2d"
	Nov 01 10:43:52 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:52.530012     724 scope.go:117] "RemoveContainer" containerID="ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c"
	Nov 01 10:43:52 default-k8s-diff-port-433711 kubelet[724]: E1101 10:43:52.530220     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrzq2_kubernetes-dashboard(fc8d209f-d810-4662-8f93-fa4bbb2f139f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2" podUID="fc8d209f-d810-4662-8f93-fa4bbb2f139f"
	Nov 01 10:44:03 default-k8s-diff-port-433711 kubelet[724]: I1101 10:44:03.455761     724 scope.go:117] "RemoveContainer" containerID="ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c"
	Nov 01 10:44:03 default-k8s-diff-port-433711 kubelet[724]: E1101 10:44:03.456002     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrzq2_kubernetes-dashboard(fc8d209f-d810-4662-8f93-fa4bbb2f139f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2" podUID="fc8d209f-d810-4662-8f93-fa4bbb2f139f"
	Nov 01 10:44:10 default-k8s-diff-port-433711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:44:10 default-k8s-diff-port-433711 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:44:10 default-k8s-diff-port-433711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:44:10 default-k8s-diff-port-433711 systemd[1]: kubelet.service: Consumed 1.636s CPU time.
	
	
	==> kubernetes-dashboard [2627f9ff573b62e65714ef8ce20547c7a9346b10aa430a1b15470b7601f6ba12] <==
	2025/11/01 10:43:28 Starting overwatch
	2025/11/01 10:43:28 Using namespace: kubernetes-dashboard
	2025/11/01 10:43:28 Using in-cluster config to connect to apiserver
	2025/11/01 10:43:28 Using secret token for csrf signing
	2025/11/01 10:43:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:43:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:43:28 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:43:28 Generating JWE encryption key
	2025/11/01 10:43:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:43:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:43:28 Initializing JWE encryption key from synchronized object
	2025/11/01 10:43:28 Creating in-cluster Sidecar client
	2025/11/01 10:43:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:43:28 Serving insecurely on HTTP port: 9090
	2025/11/01 10:43:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8bacee2ea78c4e9dab8336d76dde4c00d1dd3fae53ccb6cf5794cb16154e9f2d] <==
	I1101 10:43:18.881143       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:43:48.883898       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fe923ee8c80e5de71484349f6c918f286407bfeb10d8d39f3709e34cce2f633f] <==
	I1101 10:43:49.618986       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:43:49.625998       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:43:49.626046       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:43:49.628056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:53.083315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:57.343884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:00.942093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:03.995973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:07.018928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:07.024596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:44:07.024722       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:44:07.024865       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-433711_0981853e-22c3-4990-aedb-5943cbfc8d42!
	I1101 10:44:07.024862       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"177ac40d-31f6-48f5-be20-6d54b17caa55", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-433711_0981853e-22c3-4990-aedb-5943cbfc8d42 became leader
	W1101 10:44:07.026758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:07.033667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:44:07.125148       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-433711_0981853e-22c3-4990-aedb-5943cbfc8d42!
	W1101 10:44:09.037258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:09.041546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:11.044622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:11.049440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:13.053312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:13.057162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711: exit status 2 (313.28355ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-433711 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-433711
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-433711:

-- stdout --
	[
	    {
	        "Id": "b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2",
	        "Created": "2025-11-01T10:41:33.057243261Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 380501,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T10:43:09.687575431Z",
	            "FinishedAt": "2025-11-01T10:43:08.800507891Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2/hosts",
	        "LogPath": "/var/lib/docker/containers/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2/b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2-json.log",
	        "Name": "/default-k8s-diff-port-433711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-433711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-433711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b9f86e35d4b209a005572ed4ba45b37b0d1129c8ad515ca43cf168f05ac4f6b2",
	                "LowerDir": "/var/lib/docker/overlay2/16690dfe0c3846cfaa0757431febd471d4e0256ccbe75ee197cfc5d26c1c1409-init/diff:/var/lib/docker/overlay2/a27b20dd4c3bdfd665e4122a9bc67478648c210179318e61b9d661b1928f9826/diff",
	                "MergedDir": "/var/lib/docker/overlay2/16690dfe0c3846cfaa0757431febd471d4e0256ccbe75ee197cfc5d26c1c1409/merged",
	                "UpperDir": "/var/lib/docker/overlay2/16690dfe0c3846cfaa0757431febd471d4e0256ccbe75ee197cfc5d26c1c1409/diff",
	                "WorkDir": "/var/lib/docker/overlay2/16690dfe0c3846cfaa0757431febd471d4e0256ccbe75ee197cfc5d26c1c1409/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-433711",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-433711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-433711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-433711",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-433711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3481fd5ad4e3d37befbf317d24bb644bf33da0800ae850dbeaded24ffb9ca37a",
	            "SandboxKey": "/var/run/docker/netns/3481fd5ad4e3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-433711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:82:9f:23:67:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0395ef9fed2dfe179301e2f7acf97030a23523642ea4cc41f18d2b39a90a95e0",
	                    "EndpointID": "a72c3a8a190dcfa5cf275eb5a65d366de200e39841b6014c17e5aae3e7cf9c15",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-433711",
	                        "b9f86e35d4b2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
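The NetworkSettings.Ports section of the inspect output above is where minikube reads the host-mapped ports back out; the harness does the same later in these logs with a Go template passed to docker container inspect. A minimal sketch, reusing that template with the container name from the output above:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-433711
	# prints the host port mapped to the container's SSH port (33128 in the inspect output above)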
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711: exit status 2 (318.478419ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
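The check above renders only the {{.Host}} field, which is why the output is just "Running" even though the command exits non-zero. The same template mechanism can report the other components as well; a hedged sketch (the Kubelet and APIServer field names are assumed from minikube's status template and are not shown in this log):

	out/minikube-linux-amd64 status -p default-k8s-diff-port-433711 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# expected to show the host Running while the kubelet/apiserver fields reflect the paused state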
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-433711 logs -n 25
E1101 10:44:14.929567   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/bridge-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:14.935946   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/bridge-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:14.947509   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/bridge-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:14.969175   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/bridge-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:15.011083   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/bridge-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:15.093293   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/bridge-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:44:15.255433   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/bridge-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-433711 logs -n 25: (1.044424656s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-707467 image list --format=json                                                                                                                                                                                               │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ pause   │ -p old-k8s-version-707467 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ delete  │ -p no-preload-753486                                                                                                                                                                                                                          │ no-preload-753486            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ start   │ -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p old-k8s-version-707467                                                                                                                                                                                                                     │ old-k8s-version-707467       │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-433711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-433711 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:42 UTC │ 01 Nov 25 10:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-336923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-433711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ start   │ -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ image   │ embed-certs-071527 image list --format=json                                                                                                                                                                                                   │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ pause   │ -p embed-certs-071527 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ stop    │ -p newest-cni-336923 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p embed-certs-071527                                                                                                                                                                                                                         │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-336923 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p embed-certs-071527                                                                                                                                                                                                                         │ embed-certs-071527           │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ start   │ -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ image   │ newest-cni-336923 image list --format=json                                                                                                                                                                                                    │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ pause   │ -p newest-cni-336923 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │                     │
	│ delete  │ -p newest-cni-336923                                                                                                                                                                                                                          │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ delete  │ -p newest-cni-336923                                                                                                                                                                                                                          │ newest-cni-336923            │ jenkins │ v1.37.0 │ 01 Nov 25 10:43 UTC │ 01 Nov 25 10:43 UTC │
	│ image   │ default-k8s-diff-port-433711 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │ 01 Nov 25 10:44 UTC │
	│ pause   │ -p default-k8s-diff-port-433711 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-433711 │ jenkins │ v1.37.0 │ 01 Nov 25 10:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:43:19
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:43:19.562995  385211 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:43:19.563161  385211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:19.563172  385211 out.go:374] Setting ErrFile to fd 2...
	I1101 10:43:19.563179  385211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:43:19.563441  385211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:43:19.563922  385211 out.go:368] Setting JSON to false
	I1101 10:43:19.565279  385211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8739,"bootTime":1761985060,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:43:19.565370  385211 start.go:143] virtualization: kvm guest
	I1101 10:43:19.567110  385211 out.go:179] * [newest-cni-336923] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:43:19.568689  385211 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:43:19.568720  385211 notify.go:221] Checking for updates...
	I1101 10:43:19.570960  385211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:43:19.572305  385211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:19.573417  385211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:43:19.574730  385211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:43:19.576048  385211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:43:19.577590  385211 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:19.578285  385211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:43:19.605771  385211 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:43:19.605883  385211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:19.665006  385211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 10:43:19.654595853 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:19.665205  385211 docker.go:319] overlay module found
	I1101 10:43:19.666673  385211 out.go:179] * Using the docker driver based on existing profile
	I1101 10:43:19.667653  385211 start.go:309] selected driver: docker
	I1101 10:43:19.667667  385211 start.go:930] validating driver "docker" against &{Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:19.667749  385211 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:43:19.668238  385211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:43:19.729845  385211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 10:43:19.718798371 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:43:19.730108  385211 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:43:19.730135  385211 cni.go:84] Creating CNI manager for ""
	I1101 10:43:19.730186  385211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:43:19.730221  385211 start.go:353] cluster config:
	{Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:19.731861  385211 out.go:179] * Starting "newest-cni-336923" primary control-plane node in "newest-cni-336923" cluster
	I1101 10:43:19.732887  385211 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 10:43:19.733870  385211 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 10:43:19.734910  385211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:43:19.734977  385211 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:43:19.734992  385211 cache.go:59] Caching tarball of preloaded images
	I1101 10:43:19.735037  385211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 10:43:19.735072  385211 preload.go:233] Found /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:43:19.735085  385211 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:43:19.735216  385211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/config.json ...
	I1101 10:43:19.759288  385211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 10:43:19.759307  385211 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 10:43:19.759322  385211 cache.go:233] Successfully downloaded all kic artifacts
	I1101 10:43:19.759351  385211 start.go:360] acquireMachinesLock for newest-cni-336923: {Name:mk078b1ded54eaee8a26288c21e4405f07864b1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:43:19.759448  385211 start.go:364] duration metric: took 51.416µs to acquireMachinesLock for "newest-cni-336923"
	I1101 10:43:19.759473  385211 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:43:19.759483  385211 fix.go:54] fixHost starting: 
	I1101 10:43:19.759794  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:19.777831  385211 fix.go:112] recreateIfNeeded on newest-cni-336923: state=Stopped err=<nil>
	W1101 10:43:19.777879  385211 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:43:19.580383  380170 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 10:43:19.585617  380170 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 10:43:19.586638  380170 api_server.go:141] control plane version: v1.34.1
	I1101 10:43:19.586665  380170 api_server.go:131] duration metric: took 506.95003ms to wait for apiserver health ...
	I1101 10:43:19.586676  380170 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:43:19.590035  380170 system_pods.go:59] 8 kube-system pods found
	I1101 10:43:19.590088  380170 system_pods.go:61] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:43:19.590104  380170 system_pods.go:61] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:43:19.590111  380170 system_pods.go:61] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:43:19.590119  380170 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:43:19.590131  380170 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:43:19.590137  380170 system_pods.go:61] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:43:19.590144  380170 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:43:19.590149  380170 system_pods.go:61] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Running
	I1101 10:43:19.590164  380170 system_pods.go:74] duration metric: took 3.480437ms to wait for pod list to return data ...
	I1101 10:43:19.590176  380170 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:43:19.592779  380170 default_sa.go:45] found service account: "default"
	I1101 10:43:19.592800  380170 default_sa.go:55] duration metric: took 2.617606ms for default service account to be created ...
	I1101 10:43:19.592810  380170 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:43:19.595723  380170 system_pods.go:86] 8 kube-system pods found
	I1101 10:43:19.595754  380170 system_pods.go:89] "coredns-66bc5c9577-v7tvt" [a952ead8-9f44-4ac5-8145-2a76d6bc46a3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:43:19.595765  380170 system_pods.go:89] "etcd-default-k8s-diff-port-433711" [03f82a85-2558-4e7c-9756-eb6810bc1b13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:43:19.595781  380170 system_pods.go:89] "kindnet-f2zwl" [750d06bb-d295-4d98-b8e4-71984b10453c] Running
	I1101 10:43:19.595789  380170 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-433711" [8cfed49f-4167-42a6-9f31-322f7bf9f39e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:43:19.595799  380170 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-433711" [3f857e79-5248-4153-bd5f-32d20991bbe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:43:19.595805  380170 system_pods.go:89] "kube-proxy-2g94q" [18217a2b-fb40-4fb2-9674-0194a9462c32] Running
	I1101 10:43:19.595813  380170 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-433711" [6eb0db97-ae19-467b-a720-05a325a78c1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:43:19.595818  380170 system_pods.go:89] "storage-provisioner" [93198445-c661-4c14-bb6f-2e13eb9c10ea] Running
	I1101 10:43:19.595829  380170 system_pods.go:126] duration metric: took 3.011558ms to wait for k8s-apps to be running ...
	I1101 10:43:19.595837  380170 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:43:19.595885  380170 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:43:19.612808  380170 system_svc.go:56] duration metric: took 16.960672ms WaitForService to wait for kubelet
	I1101 10:43:19.612844  380170 kubeadm.go:587] duration metric: took 2.485063342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:43:19.612867  380170 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:43:19.616298  380170 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:43:19.616333  380170 node_conditions.go:123] node cpu capacity is 8
	I1101 10:43:19.616349  380170 node_conditions.go:105] duration metric: took 3.477231ms to run NodePressure ...
	I1101 10:43:19.616364  380170 start.go:242] waiting for startup goroutines ...
	I1101 10:43:19.616379  380170 start.go:247] waiting for cluster config update ...
	I1101 10:43:19.616401  380170 start.go:256] writing updated cluster config ...
	I1101 10:43:19.616752  380170 ssh_runner.go:195] Run: rm -f paused
	I1101 10:43:19.620456  380170 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:43:19.623291  380170 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v7tvt" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:43:21.628140  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:23.630013  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	I1101 10:43:19.779427  385211 out.go:252] * Restarting existing docker container for "newest-cni-336923" ...
	I1101 10:43:19.779489  385211 cli_runner.go:164] Run: docker start newest-cni-336923
	I1101 10:43:20.014386  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:20.033355  385211 kic.go:430] container "newest-cni-336923" state is running.
	I1101 10:43:20.033776  385211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-336923
	I1101 10:43:20.051719  385211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/config.json ...
	I1101 10:43:20.051923  385211 machine.go:94] provisionDockerMachine start ...
	I1101 10:43:20.051985  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:20.069646  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:20.069891  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:20.069906  385211 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:43:20.070476  385211 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58296->127.0.0.1:33133: read: connection reset by peer
	I1101 10:43:23.216448  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-336923
	
	I1101 10:43:23.216483  385211 ubuntu.go:182] provisioning hostname "newest-cni-336923"
	I1101 10:43:23.216574  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:23.239604  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:23.240021  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:23.240050  385211 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-336923 && echo "newest-cni-336923" | sudo tee /etc/hostname
	I1101 10:43:23.406412  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-336923
	
	I1101 10:43:23.406490  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:23.430458  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:23.430817  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:23.430849  385211 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-336923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-336923/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-336923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:43:23.584527  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:43:23.584561  385211 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21830-58021/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-58021/.minikube}
	I1101 10:43:23.584586  385211 ubuntu.go:190] setting up certificates
	I1101 10:43:23.584599  385211 provision.go:84] configureAuth start
	I1101 10:43:23.584671  385211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-336923
	I1101 10:43:23.606864  385211 provision.go:143] copyHostCerts
	I1101 10:43:23.606939  385211 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem, removing ...
	I1101 10:43:23.606959  385211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem
	I1101 10:43:23.607044  385211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/ca.pem (1082 bytes)
	I1101 10:43:23.607184  385211 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem, removing ...
	I1101 10:43:23.607198  385211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem
	I1101 10:43:23.607244  385211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/cert.pem (1123 bytes)
	I1101 10:43:23.607352  385211 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem, removing ...
	I1101 10:43:23.607365  385211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem
	I1101 10:43:23.607400  385211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-58021/.minikube/key.pem (1675 bytes)
	I1101 10:43:23.607554  385211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem org=jenkins.newest-cni-336923 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-336923]
	I1101 10:43:24.105760  385211 provision.go:177] copyRemoteCerts
	I1101 10:43:24.105843  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:43:24.105901  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.123234  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.223265  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:43:24.240358  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:43:24.257152  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:43:24.273645  385211 provision.go:87] duration metric: took 689.027992ms to configureAuth
	I1101 10:43:24.273673  385211 ubuntu.go:206] setting minikube options for container-runtime
	I1101 10:43:24.273876  385211 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:24.274012  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.291114  385211 main.go:143] libmachine: Using SSH client type: native
	I1101 10:43:24.291345  385211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 10:43:24.291367  385211 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:43:24.560882  385211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:43:24.560915  385211 machine.go:97] duration metric: took 4.508974654s to provisionDockerMachine
	I1101 10:43:24.560932  385211 start.go:293] postStartSetup for "newest-cni-336923" (driver="docker")
	I1101 10:43:24.560965  385211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:43:24.561042  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:43:24.561104  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.581756  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.682079  385211 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:43:24.685513  385211 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 10:43:24.685538  385211 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 10:43:24.685552  385211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/addons for local assets ...
	I1101 10:43:24.685593  385211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-58021/.minikube/files for local assets ...
	I1101 10:43:24.685674  385211 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem -> 615222.pem in /etc/ssl/certs
	I1101 10:43:24.685761  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:43:24.693293  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:43:24.710868  385211 start.go:296] duration metric: took 149.921905ms for postStartSetup
	I1101 10:43:24.710959  385211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:43:24.711009  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.727702  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.823431  385211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 10:43:24.828000  385211 fix.go:56] duration metric: took 5.068504403s for fixHost
	I1101 10:43:24.828024  385211 start.go:83] releasing machines lock for "newest-cni-336923", held for 5.068561902s
	I1101 10:43:24.828091  385211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-336923
	I1101 10:43:24.845157  385211 ssh_runner.go:195] Run: cat /version.json
	I1101 10:43:24.845213  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.845273  385211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:43:24.845342  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:24.863014  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:24.863284  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:25.013866  385211 ssh_runner.go:195] Run: systemctl --version
	I1101 10:43:25.020582  385211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:43:25.057023  385211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:43:25.062007  385211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:43:25.062060  385211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:43:25.070026  385211 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:43:25.070050  385211 start.go:496] detecting cgroup driver to use...
	I1101 10:43:25.070082  385211 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 10:43:25.070139  385211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:43:25.085382  385211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:43:25.098030  385211 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:43:25.098075  385211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:43:25.111846  385211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:43:25.123714  385211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:43:25.203249  385211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:43:25.286193  385211 docker.go:234] disabling docker service ...
	I1101 10:43:25.286274  385211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:43:25.300278  385211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:43:25.312521  385211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:43:25.424819  385211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:43:25.535913  385211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:43:25.552035  385211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:43:25.570081  385211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:43:25.570141  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.581526  385211 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 10:43:25.581590  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.592157  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.602648  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.613285  385211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:43:25.623297  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.634452  385211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.644745  385211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:43:25.654826  385211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:43:25.663288  385211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:43:25.672545  385211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:43:25.774692  385211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:43:26.389931  385211 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:43:26.390002  385211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:43:26.395453  385211 start.go:564] Will wait 60s for crictl version
	I1101 10:43:26.395532  385211 ssh_runner.go:195] Run: which crictl
	I1101 10:43:26.400212  385211 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 10:43:26.432448  385211 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 10:43:26.432553  385211 ssh_runner.go:195] Run: crio --version
	I1101 10:43:26.469531  385211 ssh_runner.go:195] Run: crio --version
	I1101 10:43:26.509599  385211 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 10:43:26.510918  385211 cli_runner.go:164] Run: docker network inspect newest-cni-336923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 10:43:26.532517  385211 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1101 10:43:26.537439  385211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:43:26.551090  385211 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 10:43:26.552134  385211 kubeadm.go:884] updating cluster {Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:43:26.552309  385211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:43:26.552371  385211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:43:26.592302  385211 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:43:26.592326  385211 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:43:26.592385  385211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:43:26.623998  385211 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:43:26.624025  385211 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:43:26.624035  385211 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1101 10:43:26.624170  385211 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-336923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:43:26.624265  385211 ssh_runner.go:195] Run: crio config
	I1101 10:43:26.674400  385211 cni.go:84] Creating CNI manager for ""
	I1101 10:43:26.674422  385211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 10:43:26.674440  385211 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 10:43:26.674462  385211 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-336923 NodeName:newest-cni-336923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:43:26.674609  385211 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-336923"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:43:26.674672  385211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:43:26.684472  385211 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:43:26.684555  385211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:43:26.693298  385211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1101 10:43:26.708330  385211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:43:26.723417  385211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1101 10:43:26.738609  385211 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1101 10:43:26.743102  385211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:43:26.754490  385211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:43:26.860261  385211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:43:26.888382  385211 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923 for IP: 192.168.85.2
	I1101 10:43:26.888407  385211 certs.go:195] generating shared ca certs ...
	I1101 10:43:26.888429  385211 certs.go:227] acquiring lock for ca certs: {Name:mkaccd8865836adb393bd36d5021597e578e59f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:26.888637  385211 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key
	I1101 10:43:26.888701  385211 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key
	I1101 10:43:26.888718  385211 certs.go:257] generating profile certs ...
	I1101 10:43:26.888850  385211 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/client.key
	I1101 10:43:26.888933  385211 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/apiserver.key.243c0d0d
	I1101 10:43:26.888995  385211 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/proxy-client.key
	I1101 10:43:26.889152  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem (1338 bytes)
	W1101 10:43:26.889197  385211 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522_empty.pem, impossibly tiny 0 bytes
	I1101 10:43:26.889212  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:43:26.889244  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:43:26.889284  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:43:26.889316  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/certs/key.pem (1675 bytes)
	I1101 10:43:26.889372  385211 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem (1708 bytes)
	I1101 10:43:26.890238  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:43:26.915760  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:43:26.940726  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:43:26.964835  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:43:26.990573  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 10:43:27.013067  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:43:27.036519  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:43:27.059066  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/newest-cni-336923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:43:27.081056  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/ssl/certs/615222.pem --> /usr/share/ca-certificates/615222.pem (1708 bytes)
	I1101 10:43:27.103980  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:43:27.126512  385211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-58021/.minikube/certs/61522.pem --> /usr/share/ca-certificates/61522.pem (1338 bytes)
	I1101 10:43:27.149504  385211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:43:27.165265  385211 ssh_runner.go:195] Run: openssl version
	I1101 10:43:27.172420  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/61522.pem && ln -fs /usr/share/ca-certificates/61522.pem /etc/ssl/certs/61522.pem"
	I1101 10:43:27.183610  385211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/61522.pem
	I1101 10:43:27.188666  385211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:01 /usr/share/ca-certificates/61522.pem
	I1101 10:43:27.188723  385211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/61522.pem
	I1101 10:43:27.245581  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/61522.pem /etc/ssl/certs/51391683.0"
	I1101 10:43:27.256943  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/615222.pem && ln -fs /usr/share/ca-certificates/615222.pem /etc/ssl/certs/615222.pem"
	I1101 10:43:27.268346  385211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/615222.pem
	I1101 10:43:27.273383  385211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:01 /usr/share/ca-certificates/615222.pem
	I1101 10:43:27.273441  385211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/615222.pem
	I1101 10:43:27.329279  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/615222.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:43:27.340693  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:43:27.351249  385211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:43:27.355345  385211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:43:27.355403  385211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:43:27.414180  385211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:43:27.426101  385211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:43:27.430861  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:43:27.486012  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:43:27.563512  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:43:27.622833  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:43:27.682140  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:43:27.737630  385211 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:43:27.793353  385211 kubeadm.go:401] StartCluster: {Name:newest-cni-336923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-336923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:43:27.793475  385211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:43:27.793563  385211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:43:27.831673  385211 cri.go:89] found id: ""
	I1101 10:43:27.831737  385211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:43:27.840098  385211 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:43:27.840120  385211 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:43:27.840169  385211 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:43:27.847934  385211 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:43:27.848632  385211 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-336923" does not appear in /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:27.848984  385211 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-58021/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-336923" cluster setting kubeconfig missing "newest-cni-336923" context setting]
	I1101 10:43:27.849613  385211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:27.876052  385211 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:43:27.884631  385211 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1101 10:43:27.884663  385211 kubeadm.go:602] duration metric: took 44.535917ms to restartPrimaryControlPlane
	I1101 10:43:27.884674  385211 kubeadm.go:403] duration metric: took 91.333695ms to StartCluster
	I1101 10:43:27.884693  385211 settings.go:142] acquiring lock: {Name:mka443f0ac99a59b23190497686b8296dc73358d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:27.884762  385211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:43:27.885777  385211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-58021/kubeconfig: {Name:mk5f6e568b2b1908a0a2764cd109c02c3a6cce13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:43:27.921113  385211 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:43:27.921231  385211 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:43:27.921378  385211 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-336923"
	I1101 10:43:27.921390  385211 config.go:182] Loaded profile config "newest-cni-336923": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:43:27.921404  385211 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-336923"
	W1101 10:43:27.921414  385211 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:43:27.921408  385211 addons.go:70] Setting dashboard=true in profile "newest-cni-336923"
	I1101 10:43:27.921422  385211 addons.go:70] Setting default-storageclass=true in profile "newest-cni-336923"
	I1101 10:43:27.921443  385211 addons.go:239] Setting addon dashboard=true in "newest-cni-336923"
	I1101 10:43:27.921448  385211 host.go:66] Checking if "newest-cni-336923" exists ...
	W1101 10:43:27.921455  385211 addons.go:248] addon dashboard should already be in state true
	I1101 10:43:27.921458  385211 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-336923"
	I1101 10:43:27.921512  385211 host.go:66] Checking if "newest-cni-336923" exists ...
	I1101 10:43:27.921788  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.921874  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.921878  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.922943  385211 out.go:179] * Verifying Kubernetes components...
	I1101 10:43:27.926542  385211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:43:27.948055  385211 addons.go:239] Setting addon default-storageclass=true in "newest-cni-336923"
	W1101 10:43:27.948083  385211 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:43:27.948115  385211 host.go:66] Checking if "newest-cni-336923" exists ...
	I1101 10:43:27.948592  385211 cli_runner.go:164] Run: docker container inspect newest-cni-336923 --format={{.State.Status}}
	I1101 10:43:27.949931  385211 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:43:27.951582  385211 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 10:43:27.951591  385211 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:43:27.951717  385211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:43:27.951785  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:27.952894  385211 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1101 10:43:25.630441  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:28.132297  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	I1101 10:43:27.954213  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 10:43:27.954233  385211 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 10:43:27.954294  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:27.975475  385211 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:43:27.975514  385211 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:43:27.975584  385211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-336923
	I1101 10:43:27.976243  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:27.982565  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:28.003386  385211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/newest-cni-336923/id_rsa Username:docker}
	I1101 10:43:28.069464  385211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:43:28.097817  385211 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:43:28.097875  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:43:28.097884  385211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:43:28.099383  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 10:43:28.099403  385211 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 10:43:28.122313  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:43:28.122561  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 10:43:28.122586  385211 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 10:43:28.149303  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 10:43:28.149330  385211 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 10:43:28.174173  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 10:43:28.174199  385211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 10:43:28.195857  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 10:43:28.195884  385211 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1101 10:43:28.201388  385211 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:43:28.201435  385211 retry.go:31] will retry after 259.751612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 10:43:28.218472  385211 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:43:28.218525  385211 retry.go:31] will retry after 370.922823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 10:43:28.219522  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 10:43:28.219550  385211 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 10:43:28.237475  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 10:43:28.237512  385211 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 10:43:28.253898  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 10:43:28.253926  385211 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 10:43:28.267526  385211 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:43:28.267552  385211 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 10:43:28.280403  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 10:43:28.462006  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:43:28.590605  385211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:43:28.598216  385211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:43:30.341998  385211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.061546929s)
	I1101 10:43:30.343194  385211 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-336923 addons enable metrics-server
	
	I1101 10:43:30.427213  385211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.96517409s)
	I1101 10:43:30.427275  385211 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.836635399s)
	I1101 10:43:30.427306  385211 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.829057914s)
	I1101 10:43:30.427330  385211 api_server.go:72] duration metric: took 2.506172151s to wait for apiserver process to appear ...
	I1101 10:43:30.427336  385211 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:43:30.427357  385211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:43:30.434031  385211 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:43:30.434053  385211 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:43:30.442027  385211 out.go:179] * Enabled addons: dashboard, storage-provisioner, default-storageclass
	I1101 10:43:30.443021  385211 addons.go:515] duration metric: took 2.521798237s for enable addons: enabled=[dashboard storage-provisioner default-storageclass]
	I1101 10:43:30.928254  385211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:43:30.933188  385211 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:43:30.933224  385211 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:43:31.427738  385211 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1101 10:43:31.432031  385211 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1101 10:43:31.433126  385211 api_server.go:141] control plane version: v1.34.1
	I1101 10:43:31.433156  385211 api_server.go:131] duration metric: took 1.005812081s to wait for apiserver health ...
	I1101 10:43:31.433168  385211 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:43:31.436835  385211 system_pods.go:59] 8 kube-system pods found
	I1101 10:43:31.436864  385211 system_pods.go:61] "coredns-66bc5c9577-j9pcl" [9244c7b5-e2f4-44ec-a7c9-f337e044f46e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:43:31.436872  385211 system_pods.go:61] "etcd-newest-cni-336923" [e4c9b0a5-3bfb-4e36-bc6e-fcfe9945c1f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:43:31.436882  385211 system_pods.go:61] "kindnet-6lbk4" [e62d231c-e1d5-4e4a-81e1-0be9614e211d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 10:43:31.436890  385211 system_pods.go:61] "kube-apiserver-newest-cni-336923" [f7c5c26f-4f73-459f-b72a-79f07879ab50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:43:31.436897  385211 system_pods.go:61] "kube-controller-manager-newest-cni-336923" [4d758565-1733-499f-ad35-853e88c03a13] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:43:31.436903  385211 system_pods.go:61] "kube-proxy-z65pd" [5a6496ad-eaf7-4f96-af7e-0dd5f88346c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:43:31.436910  385211 system_pods.go:61] "kube-scheduler-newest-cni-336923" [03d3cde4-6638-4fe6-949a-26f05cd8dfac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:43:31.436915  385211 system_pods.go:61] "storage-provisioner" [7165902e-833a-41e9-84eb-cf31f057f373] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 10:43:31.436924  385211 system_pods.go:74] duration metric: took 3.751261ms to wait for pod list to return data ...
	I1101 10:43:31.436933  385211 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:43:31.439538  385211 default_sa.go:45] found service account: "default"
	I1101 10:43:31.439560  385211 default_sa.go:55] duration metric: took 2.618436ms for default service account to be created ...
	I1101 10:43:31.439574  385211 kubeadm.go:587] duration metric: took 3.518414216s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 10:43:31.439596  385211 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:43:31.442059  385211 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 10:43:31.442085  385211 node_conditions.go:123] node cpu capacity is 8
	I1101 10:43:31.442098  385211 node_conditions.go:105] duration metric: took 2.496441ms to run NodePressure ...
	I1101 10:43:31.442113  385211 start.go:242] waiting for startup goroutines ...
	I1101 10:43:31.442127  385211 start.go:247] waiting for cluster config update ...
	I1101 10:43:31.442144  385211 start.go:256] writing updated cluster config ...
	I1101 10:43:31.442423  385211 ssh_runner.go:195] Run: rm -f paused
	I1101 10:43:31.493548  385211 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:43:31.495480  385211 out.go:179] * Done! kubectl is now configured to use "newest-cni-336923" cluster and "default" namespace by default
	W1101 10:43:30.628520  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:32.629114  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:35.128812  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:37.629055  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:40.128800  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:42.628540  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:44.629264  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:47.129278  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:49.628739  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:52.129160  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	W1101 10:43:54.628303  380170 pod_ready.go:104] pod "coredns-66bc5c9577-v7tvt" is not "Ready", error: <nil>
	I1101 10:43:56.628389  380170 pod_ready.go:94] pod "coredns-66bc5c9577-v7tvt" is "Ready"
	I1101 10:43:56.628417  380170 pod_ready.go:86] duration metric: took 37.005101259s for pod "coredns-66bc5c9577-v7tvt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:56.630867  380170 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:56.634391  380170 pod_ready.go:94] pod "etcd-default-k8s-diff-port-433711" is "Ready"
	I1101 10:43:56.634415  380170 pod_ready.go:86] duration metric: took 3.522298ms for pod "etcd-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:56.636115  380170 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:56.639521  380170 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-433711" is "Ready"
	I1101 10:43:56.639545  380170 pod_ready.go:86] duration metric: took 3.405718ms for pod "kube-apiserver-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:56.641320  380170 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:56.826879  380170 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-433711" is "Ready"
	I1101 10:43:56.826906  380170 pod_ready.go:86] duration metric: took 185.570131ms for pod "kube-controller-manager-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:57.026875  380170 pod_ready.go:83] waiting for pod "kube-proxy-2g94q" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:57.426934  380170 pod_ready.go:94] pod "kube-proxy-2g94q" is "Ready"
	I1101 10:43:57.426962  380170 pod_ready.go:86] duration metric: took 400.060114ms for pod "kube-proxy-2g94q" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:57.627021  380170 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:58.027010  380170 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-433711" is "Ready"
	I1101 10:43:58.027037  380170 pod_ready.go:86] duration metric: took 399.991909ms for pod "kube-scheduler-default-k8s-diff-port-433711" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:43:58.027049  380170 pod_ready.go:40] duration metric: took 38.406562117s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:43:58.070315  380170 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:43:58.071931  380170 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-433711" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:43:29 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:29.282517425Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 10:43:29 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:29.28629466Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 10:43:29 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:29.286324741Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.456538395Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4b530148-8fac-4460-a2a9-95ed4780bada name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.457458774Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5663b6b1-98e1-4869-baca-818204472644 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.458475933Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2/dashboard-metrics-scraper" id=701fd135-0f36-4cd1-a79c-c9fbd333bd4b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.458628347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.464239198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.464700481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.493710665Z" level=info msg="Created container ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2/dashboard-metrics-scraper" id=701fd135-0f36-4cd1-a79c-c9fbd333bd4b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.494205208Z" level=info msg="Starting container: ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c" id=ddf2ac11-c8ea-4bd2-80de-57d0237d6dcd name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.495939132Z" level=info msg="Started container" PID=1751 containerID=ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2/dashboard-metrics-scraper id=ddf2ac11-c8ea-4bd2-80de-57d0237d6dcd name=/runtime.v1.RuntimeService/StartContainer sandboxID=87a33bbb38ddef292d98ce33d11c877d2a45f773399f294f959e3e785a51b92f
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.561566546Z" level=info msg="Removing container: a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698" id=9496230b-d72b-41ec-973a-d9d86f659852 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:43:46 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:46.57023854Z" level=info msg="Removed container a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2/dashboard-metrics-scraper" id=9496230b-d72b-41ec-973a-d9d86f659852 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.571180042Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=08a7cbe2-cfe1-42ee-b76d-6d4d2fbf42da name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.572182335Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2b1ecfb3-6178-4859-9b57-0c7e8c869efd name=/runtime.v1.ImageService/ImageStatus
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.573280652Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=07018ef5-fda7-4ed8-8428-928b6e0ea4b4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.573417153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.578363314Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.578491153Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/1c51413804b9c85cd9f4509b1597fc4abe4a2f91694b8cefcac2151c0d08ce68/merged/etc/passwd: no such file or directory"
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.578525543Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1c51413804b9c85cd9f4509b1597fc4abe4a2f91694b8cefcac2151c0d08ce68/merged/etc/group: no such file or directory"
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.578773673Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.604950061Z" level=info msg="Created container fe923ee8c80e5de71484349f6c918f286407bfeb10d8d39f3709e34cce2f633f: kube-system/storage-provisioner/storage-provisioner" id=07018ef5-fda7-4ed8-8428-928b6e0ea4b4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.605509473Z" level=info msg="Starting container: fe923ee8c80e5de71484349f6c918f286407bfeb10d8d39f3709e34cce2f633f" id=32305199-fedb-4c4e-a646-157a9a8624b2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 10:43:49 default-k8s-diff-port-433711 crio[566]: time="2025-11-01T10:43:49.607402679Z" level=info msg="Started container" PID=1765 containerID=fe923ee8c80e5de71484349f6c918f286407bfeb10d8d39f3709e34cce2f633f description=kube-system/storage-provisioner/storage-provisioner id=32305199-fedb-4c4e-a646-157a9a8624b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ae2c68ac30995f3ea9cd808bc8865f9030055807d5937530f59ead5dfcbe53b6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	fe923ee8c80e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   ae2c68ac30995       storage-provisioner                                    kube-system
	ff771c6cd6560       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   87a33bbb38dde       dashboard-metrics-scraper-6ffb444bf9-wrzq2             kubernetes-dashboard
	2627f9ff573b6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   5db16fe909cbb       kubernetes-dashboard-855c9754f9-fbhvp                  kubernetes-dashboard
	20518f3b36581       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   4c74bc691ead4       busybox                                                default
	f480d182aec70       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   83aa98f2cf102       coredns-66bc5c9577-v7tvt                               kube-system
	8bacee2ea78c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   ae2c68ac30995       storage-provisioner                                    kube-system
	56f78b23eff04       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   f216498248b7c       kube-proxy-2g94q                                       kube-system
	268037ed92509       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   b28c9886fe3eb       kindnet-f2zwl                                          kube-system
	a47e2aa79fa21       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   2650bf242359a       kube-apiserver-default-k8s-diff-port-433711            kube-system
	ee4bc7ee40943       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   2b0eefdc3fc56       kube-scheduler-default-k8s-diff-port-433711            kube-system
	ba242b116eb39       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   7e9788e3ad063       kube-controller-manager-default-k8s-diff-port-433711   kube-system
	8083c17bef04a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   78068b36711d0       etcd-default-k8s-diff-port-433711                      kube-system
	
	
	==> coredns [f480d182aec70d74af5c26d6b5649b3d392235fd29b5f2b0e869a42a8aab1142] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50940 - 29971 "HINFO IN 7321720219376195779.8126889959094793254. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031465491s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-433711
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-433711
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=default-k8s-diff-port-433711
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_41_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:41:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-433711
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:44:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:44:09 +0000   Sat, 01 Nov 2025 10:41:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:44:09 +0000   Sat, 01 Nov 2025 10:41:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:44:09 +0000   Sat, 01 Nov 2025 10:41:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:44:09 +0000   Sat, 01 Nov 2025 10:42:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-433711
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                e1d5f657-b6a1-42bf-b6a8-18744a9a0476
	  Boot ID:                    21cecb0b-2a7f-46d3-9ffb-636032844e0a
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-v7tvt                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-433711                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m25s
	  kube-system                 kindnet-f2zwl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-433711             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-433711    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-2g94q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-default-k8s-diff-port-433711             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wrzq2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fbhvp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m17s                  kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m28s (x8 over 2m28s)  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m28s (x8 over 2m28s)  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m28s (x8 over 2m28s)  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m24s                  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m24s                  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m24s                  kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m19s                  node-controller  Node default-k8s-diff-port-433711 event: Registered Node default-k8s-diff-port-433711 in Controller
	  Normal  NodeReady                97s                    kubelet          Node default-k8s-diff-port-433711 status is now: NodeReady
	  Normal  Starting                 58s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)      kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)      kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)      kubelet          Node default-k8s-diff-port-433711 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                    node-controller  Node default-k8s-diff-port-433711 event: Registered Node default-k8s-diff-port-433711 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a b0 8a 91 8d 92 08 06
	[  +0.000330] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 86 69 e9 76 fc 89 08 06
	[ +36.842898] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[  +0.029414] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5a ea 60 3a a0 14 08 06
	[Nov 1 10:39] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[  +0.003104] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 bf d0 1c 89 85 08 06
	[ +16.331919] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 95 76 46 f7 b2 08 06
	[  +0.000529] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 62 5f 88 bd 02 41 08 06
	[ +22.535010] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 13 21 e4 71 81 08 06
	[  +0.000399] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 b2 21 61 6f 10 08 06
	[Nov 1 10:41] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ce db d5 61 d2 2d 08 06
	[  +0.000342] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 50 ec 89 c1 90 08 06
	[Nov 1 10:42] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000013] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [8083c17bef04a52cbc3835ee9f8f046af5ef91f84b3497be4940886ec319826a] <==
	{"level":"warn","ts":"2025-11-01T10:43:17.571156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.577626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.597572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.604807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.618241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.624450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.632519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.639218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.646622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.653757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.661127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.667855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.674791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.682823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.689904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.696808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.704892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.712537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.720141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.734697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.738103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.747029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.753805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:43:17.815577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54516","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:43:26.269986Z","caller":"traceutil/trace.go:172","msg":"trace[276127190] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"114.126473ms","start":"2025-11-01T10:43:26.155836Z","end":"2025-11-01T10:43:26.269962Z","steps":["trace[276127190] 'process raft request'  (duration: 110.940695ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:44:15 up  2:26,  0 user,  load average: 2.79, 3.60, 2.56
	Linux default-k8s-diff-port-433711 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [268037ed9250984f4892d07ede3dc1caa15abd0d0ee1e13d165836a8f5d56237] <==
	I1101 10:43:19.058140       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 10:43:19.060022       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 10:43:19.060290       1 main.go:148] setting mtu 1500 for CNI 
	I1101 10:43:19.060313       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 10:43:19.060342       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T10:43:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 10:43:19.263404       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 10:43:19.263443       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 10:43:19.263455       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 10:43:19.263589       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 10:43:19.663921       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 10:43:19.663945       1 metrics.go:72] Registering metrics
	I1101 10:43:19.664004       1 controller.go:711] "Syncing nftables rules"
	I1101 10:43:29.263754       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:43:29.263820       1 main.go:301] handling current node
	I1101 10:43:39.263764       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:43:39.263795       1 main.go:301] handling current node
	I1101 10:43:49.263201       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:43:49.263252       1 main.go:301] handling current node
	I1101 10:43:59.263162       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:43:59.263213       1 main.go:301] handling current node
	I1101 10:44:09.269395       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 10:44:09.269439       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a47e2aa79fa21a30c460b676774cdb84b1d8dccc92e263a4ff967b3e351c7284] <==
	I1101 10:43:18.353084       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:43:18.353095       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:43:18.353239       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:43:18.353248       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:43:18.353548       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:43:18.353653       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:43:18.353663       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:43:18.353788       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:43:18.353930       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:43:18.353983       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:43:18.354465       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:43:18.360295       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 10:43:18.366910       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:43:18.388893       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:43:18.502312       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:43:18.801316       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:43:18.891296       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:43:18.917948       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:43:18.934545       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:43:18.990085       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.97.177"}
	I1101 10:43:19.005969       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.249.188"}
	I1101 10:43:19.257041       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:43:21.433229       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:43:21.484364       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:43:21.582993       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ba242b116eb3920e232ebbe1eb907d675c7b4d49cc536f95a10a281b6e468a77] <==
	I1101 10:43:21.060482       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:43:21.060541       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:43:21.060551       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:43:21.060556       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:43:21.062803       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:43:21.063969       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:43:21.071246       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:43:21.079645       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:43:21.079688       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:43:21.079724       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:43:21.079771       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:43:21.079895       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:43:21.079935       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:43:21.079978       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-433711"
	I1101 10:43:21.080027       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:43:21.080164       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:43:21.080256       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:43:21.080342       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:43:21.080863       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:43:21.080887       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:43:21.082146       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:43:21.082198       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:43:21.086173       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:43:21.088669       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:43:21.098952       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [56f78b23eff04b03a0c09efc93f2ddd6f650e3db549d5fbe24b8463049729188] <==
	I1101 10:43:18.925611       1 server_linux.go:53] "Using iptables proxy"
	I1101 10:43:19.015383       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:43:19.116425       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:43:19.116456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 10:43:19.116542       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:43:19.139090       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 10:43:19.139149       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:43:19.144744       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:43:19.145145       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:43:19.145187       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:43:19.146954       1 config.go:200] "Starting service config controller"
	I1101 10:43:19.146981       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:43:19.147034       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:43:19.147047       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:43:19.147066       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:43:19.147074       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:43:19.147102       1 config.go:309] "Starting node config controller"
	I1101 10:43:19.147111       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:43:19.247195       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:43:19.247196       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:43:19.247238       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:43:19.247305       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ee4bc7ee409435014537fb2e187082556b0eb41b0a940a43ed6a16f657936a76] <==
	I1101 10:43:17.512334       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:43:19.601001       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:43:19.601102       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:43:19.606182       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:43:19.606321       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:43:19.606284       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:43:19.606479       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:43:19.606258       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:43:19.606797       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:43:19.606681       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:43:19.606703       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:43:19.706778       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:43:19.706889       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:43:19.707045       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:43:21 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:21.817520     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fd3ea554-304d-4143-ab2e-461ce7d2077c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-fbhvp\" (UID: \"fd3ea554-304d-4143-ab2e-461ce7d2077c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbhvp"
	Nov 01 10:43:21 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:21.817617     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f54xb\" (UniqueName: \"kubernetes.io/projected/fd3ea554-304d-4143-ab2e-461ce7d2077c-kube-api-access-f54xb\") pod \"kubernetes-dashboard-855c9754f9-fbhvp\" (UID: \"fd3ea554-304d-4143-ab2e-461ce7d2077c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbhvp"
	Nov 01 10:43:24 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:24.499662     724 scope.go:117] "RemoveContainer" containerID="20f26770d6752c6319d3409fcf7d94ab1abe42f74db1d975151fab98527fa443"
	Nov 01 10:43:25 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:25.504939     724 scope.go:117] "RemoveContainer" containerID="20f26770d6752c6319d3409fcf7d94ab1abe42f74db1d975151fab98527fa443"
	Nov 01 10:43:25 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:25.505102     724 scope.go:117] "RemoveContainer" containerID="a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698"
	Nov 01 10:43:25 default-k8s-diff-port-433711 kubelet[724]: E1101 10:43:25.505300     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrzq2_kubernetes-dashboard(fc8d209f-d810-4662-8f93-fa4bbb2f139f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2" podUID="fc8d209f-d810-4662-8f93-fa4bbb2f139f"
	Nov 01 10:43:26 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:26.400555     724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:43:26 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:26.511042     724 scope.go:117] "RemoveContainer" containerID="a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698"
	Nov 01 10:43:26 default-k8s-diff-port-433711 kubelet[724]: E1101 10:43:26.511204     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrzq2_kubernetes-dashboard(fc8d209f-d810-4662-8f93-fa4bbb2f139f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2" podUID="fc8d209f-d810-4662-8f93-fa4bbb2f139f"
	Nov 01 10:43:28 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:28.529517     724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbhvp" podStartSLOduration=1.533801211 podStartE2EDuration="7.529479107s" podCreationTimestamp="2025-11-01 10:43:21 +0000 UTC" firstStartedPulling="2025-11-01 10:43:22.034019653 +0000 UTC m=+5.670022799" lastFinishedPulling="2025-11-01 10:43:28.029697546 +0000 UTC m=+11.665700695" observedRunningTime="2025-11-01 10:43:28.529477327 +0000 UTC m=+12.165480495" watchObservedRunningTime="2025-11-01 10:43:28.529479107 +0000 UTC m=+12.165482275"
	Nov 01 10:43:32 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:32.529748     724 scope.go:117] "RemoveContainer" containerID="a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698"
	Nov 01 10:43:32 default-k8s-diff-port-433711 kubelet[724]: E1101 10:43:32.529915     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrzq2_kubernetes-dashboard(fc8d209f-d810-4662-8f93-fa4bbb2f139f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2" podUID="fc8d209f-d810-4662-8f93-fa4bbb2f139f"
	Nov 01 10:43:46 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:46.456055     724 scope.go:117] "RemoveContainer" containerID="a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698"
	Nov 01 10:43:46 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:46.560274     724 scope.go:117] "RemoveContainer" containerID="a8050faa78ce79e66d5d6470e834e1a6499bf559d497ec7aa7b0a0e7a90e2698"
	Nov 01 10:43:46 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:46.560523     724 scope.go:117] "RemoveContainer" containerID="ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c"
	Nov 01 10:43:46 default-k8s-diff-port-433711 kubelet[724]: E1101 10:43:46.560728     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrzq2_kubernetes-dashboard(fc8d209f-d810-4662-8f93-fa4bbb2f139f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2" podUID="fc8d209f-d810-4662-8f93-fa4bbb2f139f"
	Nov 01 10:43:49 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:49.570773     724 scope.go:117] "RemoveContainer" containerID="8bacee2ea78c4e9dab8336d76dde4c00d1dd3fae53ccb6cf5794cb16154e9f2d"
	Nov 01 10:43:52 default-k8s-diff-port-433711 kubelet[724]: I1101 10:43:52.530012     724 scope.go:117] "RemoveContainer" containerID="ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c"
	Nov 01 10:43:52 default-k8s-diff-port-433711 kubelet[724]: E1101 10:43:52.530220     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrzq2_kubernetes-dashboard(fc8d209f-d810-4662-8f93-fa4bbb2f139f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2" podUID="fc8d209f-d810-4662-8f93-fa4bbb2f139f"
	Nov 01 10:44:03 default-k8s-diff-port-433711 kubelet[724]: I1101 10:44:03.455761     724 scope.go:117] "RemoveContainer" containerID="ff771c6cd6560e134a264546d23b7e89dbe7f48e03975f70ccc08edcc12ef89c"
	Nov 01 10:44:03 default-k8s-diff-port-433711 kubelet[724]: E1101 10:44:03.456002     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wrzq2_kubernetes-dashboard(fc8d209f-d810-4662-8f93-fa4bbb2f139f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wrzq2" podUID="fc8d209f-d810-4662-8f93-fa4bbb2f139f"
	Nov 01 10:44:10 default-k8s-diff-port-433711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 10:44:10 default-k8s-diff-port-433711 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 10:44:10 default-k8s-diff-port-433711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 10:44:10 default-k8s-diff-port-433711 systemd[1]: kubelet.service: Consumed 1.636s CPU time.
	
	
	==> kubernetes-dashboard [2627f9ff573b62e65714ef8ce20547c7a9346b10aa430a1b15470b7601f6ba12] <==
	2025/11/01 10:43:28 Starting overwatch
	2025/11/01 10:43:28 Using namespace: kubernetes-dashboard
	2025/11/01 10:43:28 Using in-cluster config to connect to apiserver
	2025/11/01 10:43:28 Using secret token for csrf signing
	2025/11/01 10:43:28 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 10:43:28 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 10:43:28 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 10:43:28 Generating JWE encryption key
	2025/11/01 10:43:28 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 10:43:28 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 10:43:28 Initializing JWE encryption key from synchronized object
	2025/11/01 10:43:28 Creating in-cluster Sidecar client
	2025/11/01 10:43:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 10:43:28 Serving insecurely on HTTP port: 9090
	2025/11/01 10:43:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8bacee2ea78c4e9dab8336d76dde4c00d1dd3fae53ccb6cf5794cb16154e9f2d] <==
	I1101 10:43:18.881143       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 10:43:48.883898       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fe923ee8c80e5de71484349f6c918f286407bfeb10d8d39f3709e34cce2f633f] <==
	I1101 10:43:49.618986       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 10:43:49.625998       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 10:43:49.626046       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 10:43:49.628056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:53.083315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:43:57.343884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:00.942093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:03.995973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:07.018928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:07.024596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:44:07.024722       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:44:07.024865       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-433711_0981853e-22c3-4990-aedb-5943cbfc8d42!
	I1101 10:44:07.024862       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"177ac40d-31f6-48f5-be20-6d54b17caa55", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-433711_0981853e-22c3-4990-aedb-5943cbfc8d42 became leader
	W1101 10:44:07.026758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:07.033667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:44:07.125148       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-433711_0981853e-22c3-4990-aedb-5943cbfc8d42!
	W1101 10:44:09.037258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:09.041546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:11.044622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:11.049440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:13.053312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:13.057162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:15.061331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:44:15.065762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711
E1101 10:44:15.577198   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/bridge-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711: exit status 2 (321.725906ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-433711 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.00s)


Test pass (263/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 14.05
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 12.74
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.8
22 TestOffline 80.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 150.28
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.41
48 TestAddons/StoppedEnableDisable 16.76
49 TestCertOptions 27.82
50 TestCertExpiration 212.81
52 TestForceSystemdFlag 25.28
53 TestForceSystemdEnv 25.41
58 TestErrorSpam/setup 20.17
59 TestErrorSpam/start 0.66
60 TestErrorSpam/status 0.96
61 TestErrorSpam/pause 5.47
62 TestErrorSpam/unpause 6.01
63 TestErrorSpam/stop 18.09
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 38.59
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.85
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.75
75 TestFunctional/serial/CacheCmd/cache/add_local 1.95
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 44.91
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.26
86 TestFunctional/serial/LogsFileCmd 1.26
87 TestFunctional/serial/InvalidService 3.96
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 9.97
91 TestFunctional/parallel/DryRun 0.38
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 1.09
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 24.43
101 TestFunctional/parallel/SSHCmd 0.7
102 TestFunctional/parallel/CpCmd 1.83
103 TestFunctional/parallel/MySQL 17.08
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.74
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
113 TestFunctional/parallel/License 0.53
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.71
119 TestFunctional/parallel/ImageCommands/Setup 1.95
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.26
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
141 TestFunctional/parallel/Version/short 0.06
142 TestFunctional/parallel/Version/components 0.47
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
145 TestFunctional/parallel/ProfileCmd/profile_list 0.4
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
147 TestFunctional/parallel/MountCmd/any-port 7.61
148 TestFunctional/parallel/MountCmd/specific-port 2.01
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.52
150 TestFunctional/parallel/ServiceCmd/List 1.7
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 111.02
163 TestMultiControlPlane/serial/DeployApp 6.31
164 TestMultiControlPlane/serial/PingHostFromPods 1.02
165 TestMultiControlPlane/serial/AddWorkerNode 24.52
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
168 TestMultiControlPlane/serial/CopyFile 17.13
169 TestMultiControlPlane/serial/StopSecondaryNode 19.77
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.67
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 102.17
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.5
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
176 TestMultiControlPlane/serial/StopCluster 43.17
177 TestMultiControlPlane/serial/RestartCluster 58.61
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
179 TestMultiControlPlane/serial/AddSecondaryNode 42.41
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
185 TestJSONOutput/start/Command 40.76
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.96
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 38.24
211 TestKicCustomNetwork/use_default_bridge_network 23.69
212 TestKicExistingNetwork 23.43
213 TestKicCustomSubnet 24.84
214 TestKicStaticIP 25.93
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 48.83
219 TestMountStart/serial/StartWithMountFirst 6.15
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 5.46
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.98
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 65.23
231 TestMultiNode/serial/DeployApp2Nodes 4.7
232 TestMultiNode/serial/PingHostFrom2Pods 0.74
233 TestMultiNode/serial/AddNode 24.1
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.65
236 TestMultiNode/serial/CopyFile 9.64
237 TestMultiNode/serial/StopNode 2.24
238 TestMultiNode/serial/StartAfterStop 7.08
239 TestMultiNode/serial/RestartKeepsNodes 76.63
240 TestMultiNode/serial/DeleteNode 5.25
241 TestMultiNode/serial/StopMultiNode 28.38
242 TestMultiNode/serial/RestartMultiNode 27.9
243 TestMultiNode/serial/ValidateNameConflict 23.66
248 TestPreload 94.14
250 TestScheduledStopUnix 96.99
253 TestInsufficientStorage 9.57
254 TestRunningBinaryUpgrade 56.51
256 TestKubernetesUpgrade 313.86
257 TestMissingContainerUpgrade 113.86
258 TestStoppedBinaryUpgrade/Setup 3.77
259 TestStoppedBinaryUpgrade/Upgrade 77.79
263 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
268 TestNetworkPlugins/group/false 3.53
280 TestPause/serial/Start 44.63
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
283 TestNoKubernetes/serial/StartWithK8s 21.49
284 TestNoKubernetes/serial/StartWithStopK8s 17.29
285 TestPause/serial/SecondStartNoReconfiguration 6.14
287 TestNoKubernetes/serial/Start 7.75
288 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
289 TestNoKubernetes/serial/ProfileList 32.19
290 TestNoKubernetes/serial/Stop 1.3
291 TestNoKubernetes/serial/StartNoArgs 7.69
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
293 TestNetworkPlugins/group/auto/Start 41.12
294 TestNetworkPlugins/group/flannel/Start 45.7
295 TestNetworkPlugins/group/auto/KubeletFlags 0.3
296 TestNetworkPlugins/group/auto/NetCatPod 9.21
297 TestNetworkPlugins/group/flannel/ControllerPod 6.01
298 TestNetworkPlugins/group/auto/DNS 0.11
299 TestNetworkPlugins/group/auto/Localhost 0.09
300 TestNetworkPlugins/group/auto/HairPin 0.09
301 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
302 TestNetworkPlugins/group/flannel/NetCatPod 7.19
303 TestNetworkPlugins/group/flannel/DNS 0.12
304 TestNetworkPlugins/group/flannel/Localhost 0.09
305 TestNetworkPlugins/group/flannel/HairPin 0.09
306 TestNetworkPlugins/group/enable-default-cni/Start 67.71
307 TestNetworkPlugins/group/bridge/Start 35.97
308 TestNetworkPlugins/group/calico/Start 50.01
309 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
310 TestNetworkPlugins/group/bridge/NetCatPod 7.18
311 TestNetworkPlugins/group/bridge/DNS 0.11
312 TestNetworkPlugins/group/bridge/Localhost 0.09
313 TestNetworkPlugins/group/bridge/HairPin 0.09
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.24
316 TestNetworkPlugins/group/kindnet/Start 41.97
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
320 TestNetworkPlugins/group/calico/ControllerPod 6.01
321 TestNetworkPlugins/group/calico/KubeletFlags 0.35
322 TestNetworkPlugins/group/calico/NetCatPod 8.28
323 TestNetworkPlugins/group/custom-flannel/Start 52.24
324 TestNetworkPlugins/group/calico/DNS 0.16
325 TestNetworkPlugins/group/calico/Localhost 0.13
326 TestNetworkPlugins/group/calico/HairPin 0.11
328 TestStartStop/group/old-k8s-version/serial/FirstStart 50.89
329 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
330 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
331 TestNetworkPlugins/group/kindnet/NetCatPod 9.24
333 TestStartStop/group/no-preload/serial/FirstStart 53.59
334 TestNetworkPlugins/group/kindnet/DNS 0.2
335 TestNetworkPlugins/group/kindnet/Localhost 0.1
336 TestNetworkPlugins/group/kindnet/HairPin 0.09
337 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
338 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.21
340 TestStartStop/group/embed-certs/serial/FirstStart 40.12
341 TestStartStop/group/old-k8s-version/serial/DeployApp 10.31
342 TestNetworkPlugins/group/custom-flannel/DNS 0.14
343 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
344 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
346 TestStartStop/group/old-k8s-version/serial/Stop 16.85
348 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.91
349 TestStartStop/group/no-preload/serial/DeployApp 9.24
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
351 TestStartStop/group/old-k8s-version/serial/SecondStart 51.07
353 TestStartStop/group/no-preload/serial/Stop 16.29
354 TestStartStop/group/embed-certs/serial/DeployApp 10.24
356 TestStartStop/group/embed-certs/serial/Stop 17.16
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
358 TestStartStop/group/no-preload/serial/SecondStart 28.62
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/embed-certs/serial/SecondStart 46.45
361 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.07
365 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
367 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
369 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
371 TestStartStop/group/newest-cni/serial/FirstStart 24.42
373 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.31
374 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
375 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
376 TestStartStop/group/newest-cni/serial/DeployApp 0
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
379 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.03
380 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
382 TestStartStop/group/newest-cni/serial/Stop 8.16
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
384 TestStartStop/group/newest-cni/serial/SecondStart 12.34
385 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
x
+
TestDownloadOnly/v1.28.0/json-events (14.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-606362 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-606362 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.051502419s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (14.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 09:54:32.491572   61522 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1101 09:54:32.491662   61522 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
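The preload-exists check above only verifies that the expected tarball is present under the minikube cache directory. A minimal sketch of that kind of lookup, assuming the same path layout as the log line above (the cache root and version string are placeholders, not values read from any config):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache location shown in the log above for a given
// Kubernetes version.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", p)
		return
	}
	fmt.Println("found local preload:", p)
}
```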

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-606362
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-606362: exit status 85 (69.311951ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-606362 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-606362 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:54:18
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:54:18.492282   61534 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:54:18.492563   61534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:54:18.492574   61534 out.go:374] Setting ErrFile to fd 2...
	I1101 09:54:18.492579   61534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:54:18.492828   61534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	W1101 09:54:18.492956   61534 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21830-58021/.minikube/config/config.json: open /home/jenkins/minikube-integration/21830-58021/.minikube/config/config.json: no such file or directory
	I1101 09:54:18.493514   61534 out.go:368] Setting JSON to true
	I1101 09:54:18.494463   61534 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5798,"bootTime":1761985060,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:54:18.494569   61534 start.go:143] virtualization: kvm guest
	I1101 09:54:18.496615   61534 out.go:99] [download-only-606362] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1101 09:54:18.496775   61534 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 09:54:18.496789   61534 notify.go:221] Checking for updates...
	I1101 09:54:18.497929   61534 out.go:171] MINIKUBE_LOCATION=21830
	I1101 09:54:18.499130   61534 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:54:18.500249   61534 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 09:54:18.501213   61534 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 09:54:18.502384   61534 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 09:54:18.504429   61534 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 09:54:18.504683   61534 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:54:18.526744   61534 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:54:18.526898   61534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:54:18.852946   61534 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-01 09:54:18.843068154 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:54:18.853058   61534 docker.go:319] overlay module found
	I1101 09:54:18.854484   61534 out.go:99] Using the docker driver based on user configuration
	I1101 09:54:18.854544   61534 start.go:309] selected driver: docker
	I1101 09:54:18.854556   61534 start.go:930] validating driver "docker" against <nil>
	I1101 09:54:18.854651   61534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:54:18.914171   61534 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-01 09:54:18.905568656 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:54:18.914372   61534 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:54:18.914891   61534 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1101 09:54:18.915097   61534 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:54:18.916743   61534 out.go:171] Using Docker driver with root privileges
	I1101 09:54:18.917768   61534 cni.go:84] Creating CNI manager for ""
	I1101 09:54:18.917850   61534 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:54:18.917866   61534 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:54:18.917964   61534 start.go:353] cluster config:
	{Name:download-only-606362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-606362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:54:18.919209   61534 out.go:99] Starting "download-only-606362" primary control-plane node in "download-only-606362" cluster
	I1101 09:54:18.919226   61534 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:54:18.920296   61534 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:54:18.920326   61534 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:54:18.920430   61534 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:54:18.936221   61534 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:54:18.936456   61534 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:54:18.936574   61534 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:54:19.030320   61534 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:54:19.030351   61534 cache.go:59] Caching tarball of preloaded images
	I1101 09:54:19.030560   61534 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:54:19.032396   61534 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 09:54:19.032410   61534 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 09:54:19.150831   61534 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1101 09:54:19.150955   61534 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-606362 host does not exist
	  To start a cluster, run: "minikube start -p download-only-606362"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
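The log above shows the download flow: fetch the MD5 checksum for the preload tarball from the GCS API, then download the tarball and verify it against that digest. A self-contained, illustrative Go sketch of the same idea (this is not minikube's downloader; the URL and checksum are copied from the log lines above):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dest and compares the MD5 of the payload
// with the expected hex digest, mirroring the checksum step in the log above.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Stream the body to disk and hash it in one pass.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4",
		"72bc7f8573f574c02d8c9a9b3496176b",
	)
	if err != nil {
		fmt.Println("download failed:", err)
	}
}
```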

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-606362
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (12.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-712511 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-712511 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.744688743s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.74s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 09:54:45.653802   61522 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 09:54:45.653864   61522 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-712511
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-712511: exit status 85 (71.146799ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-606362 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-606362 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ delete  │ -p download-only-606362                                                                                                                                                   │ download-only-606362 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ start   │ -o=json --download-only -p download-only-712511 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-712511 │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:54:32
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:54:32.960910   61926 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:54:32.961023   61926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:54:32.961032   61926 out.go:374] Setting ErrFile to fd 2...
	I1101 09:54:32.961036   61926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:54:32.961249   61926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 09:54:32.961697   61926 out.go:368] Setting JSON to true
	I1101 09:54:32.962564   61926 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5813,"bootTime":1761985060,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:54:32.962651   61926 start.go:143] virtualization: kvm guest
	I1101 09:54:32.964393   61926 out.go:99] [download-only-712511] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:54:32.964543   61926 notify.go:221] Checking for updates...
	I1101 09:54:32.965767   61926 out.go:171] MINIKUBE_LOCATION=21830
	I1101 09:54:32.968044   61926 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:54:32.969122   61926 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 09:54:32.970234   61926 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 09:54:32.971277   61926 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 09:54:32.973109   61926 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 09:54:32.973332   61926 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:54:32.994953   61926 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:54:32.995066   61926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:54:33.052064   61926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-01 09:54:33.042751363 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:54:33.052207   61926 docker.go:319] overlay module found
	I1101 09:54:33.053525   61926 out.go:99] Using the docker driver based on user configuration
	I1101 09:54:33.053552   61926 start.go:309] selected driver: docker
	I1101 09:54:33.053559   61926 start.go:930] validating driver "docker" against <nil>
	I1101 09:54:33.053661   61926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:54:33.111801   61926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-01 09:54:33.102364448 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:54:33.111988   61926 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:54:33.112470   61926 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1101 09:54:33.112626   61926 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:54:33.114438   61926 out.go:171] Using Docker driver with root privileges
	I1101 09:54:33.115436   61926 cni.go:84] Creating CNI manager for ""
	I1101 09:54:33.115506   61926 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:54:33.115519   61926 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:54:33.115591   61926 start.go:353] cluster config:
	{Name:download-only-712511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-712511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:54:33.116860   61926 out.go:99] Starting "download-only-712511" primary control-plane node in "download-only-712511" cluster
	I1101 09:54:33.116890   61926 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:54:33.117872   61926 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:54:33.117893   61926 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:54:33.117959   61926 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:54:33.133510   61926 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 09:54:33.133655   61926 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 09:54:33.133677   61926 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 09:54:33.133686   61926 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 09:54:33.133696   61926 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 09:54:33.541688   61926 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:54:33.541753   61926 cache.go:59] Caching tarball of preloaded images
	I1101 09:54:33.541924   61926 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:54:33.543533   61926 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1101 09:54:33.543549   61926 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 09:54:33.661379   61926 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1101 09:54:33.661440   61926 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21830-58021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-712511 host does not exist
	  To start a cluster, run: "minikube start -p download-only-712511"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-712511
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.39s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-647034 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-647034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-647034
--- PASS: TestDownloadOnlyKic (0.39s)

                                                
                                    
x
+
TestBinaryMirror (0.8s)

                                                
                                                
=== RUN   TestBinaryMirror
I1101 09:54:46.752663   61522 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-031328 --alsologtostderr --binary-mirror http://127.0.0.1:41345 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-031328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-031328
--- PASS: TestBinaryMirror (0.80s)

                                                
                                    
x
+
TestOffline (80.3s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-797273 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-797273 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m17.779026786s)
helpers_test.go:175: Cleaning up "offline-crio-797273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-797273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-797273: (2.51641288s)
--- PASS: TestOffline (80.30s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-407417
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-407417: exit status 85 (62.878792ms)

                                                
                                                
-- stdout --
	* Profile "addons-407417" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-407417"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-407417
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-407417: exit status 85 (63.905084ms)

                                                
                                                
-- stdout --
	* Profile "addons-407417" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-407417"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
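Both PreSetup tests above expect exit status 85 when the target profile does not exist; elsewhere in this report the tests key off other specific codes (2 from `status`, 11 from failed addon commands). A hedged sketch of asserting on a specific exit code from a CLI invocation (the binary path and arguments below mirror the log purely as an example):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// expectExitCode runs a command and reports whether it exited with the given
// code, similar to how the tests above accept exit status 85 for a missing profile.
func expectExitCode(want int, name string, args ...string) error {
	err := exec.Command(name, args...).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		if exitErr.ExitCode() == want {
			return nil
		}
		return fmt.Errorf("got exit code %d, want %d", exitErr.ExitCode(), want)
	}
	if err == nil && want == 0 {
		return nil
	}
	return fmt.Errorf("unexpected result: %v (want exit code %d)", err, want)
}

func main() {
	err := expectExitCode(85, "out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-407417")
	fmt.Println(err)
}
```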

                                                
                                    
x
+
TestAddons/Setup (150.28s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-407417 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-407417 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m30.277260965s)
--- PASS: TestAddons/Setup (150.28s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-407417 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-407417 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.41s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-407417 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-407417 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [99a6b686-6484-4836-a66c-e292ed6386c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [99a6b686-6484-4836-a66c-e292ed6386c7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004045458s
addons_test.go:694: (dbg) Run:  kubectl --context addons-407417 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-407417 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-407417 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.41s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.76s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-407417
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-407417: (16.468607072s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-407417
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-407417
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-407417
--- PASS: TestAddons/StoppedEnableDisable (16.76s)

                                                
                                    
TestCertOptions (27.82s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-000979 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-000979 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.673371831s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-000979 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-000979 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-000979 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-000979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-000979
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-000979: (2.451803912s)
--- PASS: TestCertOptions (27.82s)
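Note: the openssl step above is where the extra --apiserver-ips/--apiserver-names values are asserted to appear in the apiserver certificate. A quick manual spot-check (a sketch reusing the test's own ssh command; the grep filter is an addition for readability and assumes the cert-options profile still exists):

	out/minikube-linux-amd64 -p cert-options-000979 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"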

                                                
                                    
TestCertExpiration (212.81s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-908735 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-908735 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (22.03043143s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-908735 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-908735 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.462015384s)
helpers_test.go:175: Cleaning up "cert-expiration-908735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-908735
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-908735: (3.312244668s)
--- PASS: TestCertExpiration (212.81s)
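Note: the two starts above first issue certificates with a 3-minute lifetime and then, after the roughly three-minute gap visible in the timings, restart the same profile with --cert-expiration=8760h so the certificates get regenerated. A hedged way to read the resulting expiry from inside the node (not part of the test; assumes the profile is still running):

	out/minikube-linux-amd64 -p cert-expiration-908735 ssh \
	  "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"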

                                                
                                    
TestForceSystemdFlag (25.28s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-841776 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-841776 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.565251627s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-841776 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-841776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-841776
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-841776: (2.433713655s)
--- PASS: TestForceSystemdFlag (25.28s)
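Note: the assertion above reads /etc/crio/crio.conf.d/02-crio.conf to confirm that --force-systemd took effect. A minimal manual check (a sketch; assumes that drop-in carries CRI-O's standard cgroup_manager key, which is what the flag is meant to switch to systemd):

	out/minikube-linux-amd64 -p force-systemd-flag-841776 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"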

                                                
                                    
TestForceSystemdEnv (25.41s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-943866 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-943866 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.994975305s)
helpers_test.go:175: Cleaning up "force-systemd-env-943866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-943866
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-943866: (2.410822361s)
--- PASS: TestForceSystemdEnv (25.41s)

                                                
                                    
TestErrorSpam/setup (20.17s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-732166 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-732166 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-732166 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-732166 --driver=docker  --container-runtime=crio: (20.173586302s)
--- PASS: TestErrorSpam/setup (20.17s)

                                                
                                    
TestErrorSpam/start (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

                                                
                                    
TestErrorSpam/status (0.96s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 status
--- PASS: TestErrorSpam/status (0.96s)

                                                
                                    
TestErrorSpam/pause (5.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 pause: exit status 80 (1.604316167s)

                                                
                                                
-- stdout --
	* Pausing node nospam-732166 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:00:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 pause: exit status 80 (1.961084269s)

                                                
                                                
-- stdout --
	* Pausing node nospam-732166 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:00:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 pause: exit status 80 (1.905593871s)

                                                
                                                
-- stdout --
	* Pausing node nospam-732166 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:00:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.47s)
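Note: all three pause attempts fail the same way: minikube shells into the node and runs runc, which cannot find its state directory. The failing command is quoted verbatim in the GUEST_PAUSE message, so it can be re-run directly against the node to inspect the state (a sketch, assuming the nospam profile is still up):

	out/minikube-linux-amd64 -p nospam-732166 ssh "sudo runc list -f json"
	out/minikube-linux-amd64 -p nospam-732166 ssh "ls /run/runc"    # shows whether the directory the error complains about exists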

                                                
                                    
TestErrorSpam/unpause (6.01s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 unpause: exit status 80 (1.846165837s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-732166 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:01:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 unpause: exit status 80 (2.165105821s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-732166 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:01:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 unpause: exit status 80 (1.995164986s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-732166 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T10:01:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.01s)

                                                
                                    
TestErrorSpam/stop (18.09s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 stop: (17.883618549s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-732166 --log_dir /tmp/nospam-732166 stop
--- PASS: TestErrorSpam/stop (18.09s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21830-58021/.minikube/files/etc/test/nested/copy/61522/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (38.59s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-638125 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-638125 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.589035972s)
--- PASS: TestFunctional/serial/StartWithProxy (38.59s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.85s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1101 10:02:05.706855   61522 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-638125 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-638125 --alsologtostderr -v=8: (6.848776005s)
functional_test.go:678: soft start took 6.849667185s for "functional-638125" cluster.
I1101 10:02:12.555997   61522 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.85s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-638125 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-638125 cache add registry.k8s.io/pause:3.3: (1.004520673s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.75s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-638125 /tmp/TestFunctionalserialCacheCmdcacheadd_local2034217874/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 cache add minikube-local-cache-test:functional-638125
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-638125 cache add minikube-local-cache-test:functional-638125: (1.625004194s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 cache delete minikube-local-cache-test:functional-638125
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-638125
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (281.150396ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 cache reload
E1101 10:02:18.445219   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:02:18.451637   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:02:18.463022   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:02:18.484426   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:02:18.525838   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:02:18.607290   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:02:18.768804   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E1101 10:02:19.090432   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
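Note: the reload cycle exercised above is: delete the image from the node's container runtime, confirm it is gone, run cache reload to push the locally cached images back in, then confirm the image is present again. The same sequence by hand (commands taken from the log; assumes the functional profile is running):

	out/minikube-linux-amd64 -p functional-638125 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-638125 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
	out/minikube-linux-amd64 -p functional-638125 cache reload
	out/minikube-linux-amd64 -p functional-638125 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again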

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 kubectl -- --context functional-638125 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-638125 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.91s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-638125 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1101 10:02:19.731898   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:02:21.013584   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:02:23.575635   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:02:28.697097   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:02:38.939295   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:02:59.420761   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-638125 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.906491811s)
functional_test.go:776: restart took 44.906630821s for "functional-638125" cluster.
I1101 10:03:04.624465   61522 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (44.91s)
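Note: the restart above passes --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision, which minikube hands to kubeadm as an extra kube-apiserver argument. One hedged way to confirm it landed (not something this test does; component=kube-apiserver is the usual kubeadm static-pod label):

	kubectl --context functional-638125 -n kube-system get pod -l component=kube-apiserver -o yaml \
	  | grep enable-admission-plugins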

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-638125 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
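Note: the health check lists the tier=control-plane pods and requires each to be in phase Running with a Ready condition. Roughly the same view from the command line (a sketch; the jsonpath expression is an addition, the test parses the full JSON instead):

	kubectl --context functional-638125 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'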

                                                
                                    
TestFunctional/serial/LogsCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-638125 logs: (1.25914355s)
--- PASS: TestFunctional/serial/LogsCmd (1.26s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 logs --file /tmp/TestFunctionalserialLogsFileCmd4016844392/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-638125 logs --file /tmp/TestFunctionalserialLogsFileCmd4016844392/001/logs.txt: (1.25856588s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                    
TestFunctional/serial/InvalidService (3.96s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-638125 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-638125
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-638125: exit status 115 (349.869812ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32109 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-638125 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)
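Note: the exit status 115 / SVC_UNREACHABLE above is minikube refusing to print a URL for a Service that has no running backing pod. A quick way to see why, between the apply and delete steps (a sketch; invalidsvc.yaml is the test's own manifest):

	kubectl --context functional-638125 get endpoints invalid-svc    # no addresses, so there is nothing to route to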

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 config get cpus: exit status 14 (71.962425ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 config get cpus: exit status 14 (64.383963ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
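Note: exit status 14 here is what config get returns when the key is not set, so the run above is a full round trip: get fails, set cpus 2, get succeeds, unset, get fails again. The same cycle by hand (commands taken from the log):

	out/minikube-linux-amd64 -p functional-638125 config get cpus      # exit 14: not set
	out/minikube-linux-amd64 -p functional-638125 config set cpus 2
	out/minikube-linux-amd64 -p functional-638125 config get cpus      # prints 2
	out/minikube-linux-amd64 -p functional-638125 config unset cpus
	out/minikube-linux-amd64 -p functional-638125 config get cpus      # exit 14 again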

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-638125 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-638125 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 101416: os: process already finished
E1101 10:05:02.304406   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:18.445047   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:46.146653   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:18.445411   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DashboardCmd (9.97s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-638125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-638125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (163.3493ms)

                                                
                                                
-- stdout --
	* [functional-638125] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:03:42.457605  100711 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:03:42.457692  100711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:03:42.457700  100711 out.go:374] Setting ErrFile to fd 2...
	I1101 10:03:42.457704  100711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:03:42.457915  100711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:03:42.458338  100711 out.go:368] Setting JSON to false
	I1101 10:03:42.459508  100711 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6362,"bootTime":1761985060,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:03:42.459641  100711 start.go:143] virtualization: kvm guest
	I1101 10:03:42.461681  100711 out.go:179] * [functional-638125] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:03:42.462822  100711 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:03:42.462832  100711 notify.go:221] Checking for updates...
	I1101 10:03:42.464869  100711 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:03:42.465974  100711 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:03:42.467130  100711 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:03:42.468226  100711 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:03:42.469304  100711 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:03:42.470663  100711 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:03:42.471159  100711 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:03:42.495099  100711 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:03:42.495276  100711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:03:42.555465  100711 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 10:03:42.545224459 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:03:42.555609  100711 docker.go:319] overlay module found
	I1101 10:03:42.557958  100711 out.go:179] * Using the docker driver based on existing profile
	I1101 10:03:42.558935  100711 start.go:309] selected driver: docker
	I1101 10:03:42.558951  100711 start.go:930] validating driver "docker" against &{Name:functional-638125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-638125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:03:42.559073  100711 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:03:42.560749  100711 out.go:203] 
	W1101 10:03:42.561679  100711 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 10:03:42.562629  100711 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-638125 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
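Note: the first dry run requests 250MB and is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) because minikube enforces the 1800MB usable minimum quoted in the stderr; the second dry run drops the memory override and validates cleanly against the existing profile. Reproducing just the failing check (command copied from the log):

	out/minikube-linux-amd64 start -p functional-638125 --dry-run --memory 250MB \
	  --alsologtostderr --driver=docker --container-runtime=crio    # exit 23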

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-638125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-638125 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (160.683716ms)

                                                
                                                
-- stdout --
	* [functional-638125] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:03:43.384699  101155 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:03:43.384958  101155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:03:43.384969  101155 out.go:374] Setting ErrFile to fd 2...
	I1101 10:03:43.384974  101155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:03:43.385307  101155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:03:43.385746  101155 out.go:368] Setting JSON to false
	I1101 10:03:43.386785  101155 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6363,"bootTime":1761985060,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:03:43.386878  101155 start.go:143] virtualization: kvm guest
	I1101 10:03:43.388829  101155 out.go:179] * [functional-638125] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1101 10:03:43.389971  101155 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:03:43.390003  101155 notify.go:221] Checking for updates...
	I1101 10:03:43.391947  101155 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:03:43.392981  101155 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:03:43.394009  101155 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:03:43.395006  101155 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:03:43.396315  101155 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:03:43.397932  101155 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:03:43.398696  101155 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:03:43.422410  101155 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:03:43.422569  101155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:03:43.478341  101155 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 10:03:43.469009252 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:03:43.478440  101155 docker.go:319] overlay module found
	I1101 10:03:43.480045  101155 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1101 10:03:43.481111  101155 start.go:309] selected driver: docker
	I1101 10:03:43.481125  101155 start.go:930] validating driver "docker" against &{Name:functional-638125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-638125 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:03:43.481211  101155 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:03:43.482896  101155 out.go:203] 
	W1101 10:03:43.483916  101155 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 10:03:43.484925  101155 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
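
The French stdout/stderr above is the expected result of this test: minikube localizes its messages from the process locale, and the run still fails with the same RSRC_INSUFFICIENT_REQ_MEMORY reason as the English dry-run before it. Below is a minimal sketch of reproducing that check outside the harness, assuming the translation is selected by the standard LC_ALL/LANG environment variables (the actual mechanism is not visible in this log); the profile name and flags are copied from the command above.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Re-run the dry-run start with a French locale; exit status 23 is expected
	// because 250MB is below the usable minimum, so the error is ignored here.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-638125",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8") // assumption: locale env selects the translation
	out, _ := cmd.CombinedOutput()
	if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") &&
		strings.Contains(string(out), "allocation de mémoire") {
		fmt.Println("localized error message detected")
	} else {
		fmt.Println("output was not localized as expected")
	}
}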

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (24.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f24375ba-3c81-4208-b6f7-85b862ebd25d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003420538s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-638125 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-638125 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-638125 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-638125 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [73ebb49f-fbc8-4715-848c-7bd6fa546291] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [73ebb49f-fbc8-4715-848c-7bd6fa546291] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003872795s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-638125 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-638125 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-638125 apply -f testdata/storage-provisioner/pod.yaml
I1101 10:03:37.067211   61522 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9fa1891b-68e7-4555-852d-8c2d945de7d4] Pending
helpers_test.go:352: "sp-pod" [9fa1891b-68e7-4555-852d-8c2d945de7d4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9fa1891b-68e7-4555-852d-8c2d945de7d4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004050352s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-638125 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.43s)
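
The pass above hinges on persistence: /tmp/mount/foo is touched in the first sp-pod, the pod is deleted and recreated from the same manifest, and the final ls still finds the file because the mount is backed by the claimed PVC. A minimal sketch of the same sequence driven through kubectl follows; the context, manifest paths, and pod name come from the log, while the helper, error handling, and the omission of the readiness waits between steps are illustrative only.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes kubectl against the functional-638125 context (illustrative helper).
func run(args ...string) (string, error) {
	full := append([]string{"--context", "functional-638125"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	// Readiness waits between steps are omitted; the real harness waits for the
	// test=storage-provisioner pod to be Running before each exec.
	steps := [][]string{
		{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // should still list foo
	}
	for _, s := range steps {
		if out, err := run(s...); err != nil {
			fmt.Printf("step %v failed: %v\n%s", s, err, out)
			return
		}
	}
	fmt.Println("file survived pod recreation")
}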

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh -n functional-638125 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 cp functional-638125:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2641981137/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh -n functional-638125 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh -n functional-638125 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.83s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (17.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-638125 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-6j8zb" [271b33a7-ebe9-435b-aa75-259c35a5cd77] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-6j8zb" [271b33a7-ebe9-435b-aa75-259c35a5cd77] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.00375828s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-638125 exec mysql-5bb876957f-6j8zb -- mysql -ppassword -e "show databases;"
I1101 10:03:26.079577   61522 detect.go:223] nested VM detected
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-638125 exec mysql-5bb876957f-6j8zb -- mysql -ppassword -e "show databases;": exit status 1 (93.882183ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 10:03:26.158712   61522 retry.go:31] will retry after 798.900975ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-638125 exec mysql-5bb876957f-6j8zb -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-638125 exec mysql-5bb876957f-6j8zb -- mysql -ppassword -e "show databases;": exit status 1 (89.888021ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 10:03:27.048426   61522 retry.go:31] will retry after 1.806944901s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-638125 exec mysql-5bb876957f-6j8zb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (17.08s)
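
The two ERROR 2002 exits above are the normal window where the mysql pod is already Running but mysqld has not yet created its socket, so the harness simply retries the exec with a growing delay (the retry.go lines) until the query succeeds. A minimal sketch of that retry pattern; the pod name and query come from the log, the attempt count and backoff schedule are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// showDatabases runs the same probe as the log: a throwaway query over kubectl exec.
func showDatabases() error {
	return exec.Command("kubectl", "--context", "functional-638125",
		"exec", "mysql-5bb876957f-6j8zb", "--",
		"mysql", "-ppassword", "-e", "show databases;").Run()
}

func main() {
	delay := 500 * time.Millisecond // illustrative; the harness uses randomized delays
	for attempt := 1; attempt <= 5; attempt++ {
		if err := showDatabases(); err == nil {
			fmt.Printf("mysql answered on attempt %d\n", attempt)
			return
		}
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("mysql never became reachable")
}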

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/61522/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "sudo cat /etc/test/nested/copy/61522/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/61522.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "sudo cat /etc/ssl/certs/61522.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/61522.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "sudo cat /usr/share/ca-certificates/61522.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/615222.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "sudo cat /etc/ssl/certs/615222.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/615222.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "sudo cat /usr/share/ca-certificates/615222.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.74s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-638125 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 ssh "sudo systemctl is-active docker": exit status 1 (269.94046ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 ssh "sudo systemctl is-active containerd": exit status 1 (275.100232ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
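
The non-zero exits above are the pass condition, not a failure: on a crio cluster the docker and containerd units should be stopped, and systemctl is-active prints "inactive" and exits with status 3 for a stopped unit (which ssh then surfaces as "Process exited with status 3"). A small sketch of the same check, run locally instead of over the test's ssh wrapper; the unit names are the ones from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// Output() still returns the captured stdout when the command exits non-zero.
		out, err := exec.Command("systemctl", "is-active", unit).Output()
		state := strings.TrimSpace(string(out))
		if err != nil && state == "inactive" {
			fmt.Printf("%s: inactive, as expected on a crio node\n", unit)
		} else {
			fmt.Printf("%s: unexpectedly %q (err=%v)\n", unit, state, err)
		}
	}
}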

                                                
                                    
x
+
TestFunctional/parallel/License (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-638125 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-638125 image ls --format short --alsologtostderr:
I1101 10:03:45.638678  101714 out.go:360] Setting OutFile to fd 1 ...
I1101 10:03:45.638912  101714 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:03:45.638920  101714 out.go:374] Setting ErrFile to fd 2...
I1101 10:03:45.638924  101714 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:03:45.639138  101714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
I1101 10:03:45.639677  101714 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:03:45.639790  101714 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:03:45.640170  101714 cli_runner.go:164] Run: docker container inspect functional-638125 --format={{.State.Status}}
I1101 10:03:45.657438  101714 ssh_runner.go:195] Run: systemctl --version
I1101 10:03:45.657508  101714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-638125
I1101 10:03:45.675188  101714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/functional-638125/id_rsa Username:docker}
I1101 10:03:45.775465  101714 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-638125 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 9d0e6f6199dcb │ 155MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-638125 image ls --format table --alsologtostderr:
I1101 10:03:46.101678  101840 out.go:360] Setting OutFile to fd 1 ...
I1101 10:03:46.101983  101840 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:03:46.101997  101840 out.go:374] Setting ErrFile to fd 2...
I1101 10:03:46.102003  101840 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:03:46.102275  101840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
I1101 10:03:46.103114  101840 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:03:46.103252  101840 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:03:46.103851  101840 cli_runner.go:164] Run: docker container inspect functional-638125 --format={{.State.Status}}
I1101 10:03:46.127326  101840 ssh_runner.go:195] Run: systemctl --version
I1101 10:03:46.127377  101840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-638125
I1101 10:03:46.150301  101840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/functional-638125/id_rsa Username:docker}
I1101 10:03:46.257162  101840 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-638125 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c
82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd
:3.6.4-0"],"size":"195976448"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340e
ce6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/my
sql:5.7"],"size":"519571821"},{"id":"9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec","repoDigests":["docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58","docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed
4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-638125 image ls --format json --alsologtostderr:
I1101 10:03:45.867761  101784 out.go:360] Setting OutFile to fd 1 ...
I1101 10:03:45.867864  101784 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:03:45.867873  101784 out.go:374] Setting ErrFile to fd 2...
I1101 10:03:45.867876  101784 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:03:45.868076  101784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
I1101 10:03:45.868654  101784 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:03:45.868757  101784 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:03:45.869144  101784 cli_runner.go:164] Run: docker container inspect functional-638125 --format={{.State.Status}}
I1101 10:03:45.887170  101784 ssh_runner.go:195] Run: systemctl --version
I1101 10:03:45.887214  101784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-638125
I1101 10:03:45.905481  101784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/functional-638125/id_rsa Username:docker}
I1101 10:03:46.003370  101784 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-638125 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec
repoDigests:
- docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-638125 image ls --format yaml --alsologtostderr:
I1101 10:03:46.364079  101897 out.go:360] Setting OutFile to fd 1 ...
I1101 10:03:46.364337  101897 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:03:46.364348  101897 out.go:374] Setting ErrFile to fd 2...
I1101 10:03:46.364354  101897 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:03:46.364580  101897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
I1101 10:03:46.365185  101897 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:03:46.365338  101897 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:03:46.365900  101897 cli_runner.go:164] Run: docker container inspect functional-638125 --format={{.State.Status}}
I1101 10:03:46.387782  101897 ssh_runner.go:195] Run: systemctl --version
I1101 10:03:46.387851  101897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-638125
I1101 10:03:46.408530  101897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/functional-638125/id_rsa Username:docker}
I1101 10:03:46.515527  101897 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 ssh pgrep buildkitd: exit status 1 (269.985288ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image build -t localhost/my-image:functional-638125 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-638125 image build -t localhost/my-image:functional-638125 testdata/build --alsologtostderr: (4.222728344s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-638125 image build -t localhost/my-image:functional-638125 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7600435c296
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-638125
--> 1e5ef704c27
Successfully tagged localhost/my-image:functional-638125
1e5ef704c2798e87e758320c7013b04cb36c3aa96328fe0d8c7c53b5e6dba833
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-638125 image build -t localhost/my-image:functional-638125 testdata/build --alsologtostderr:
I1101 10:03:46.909607  102096 out.go:360] Setting OutFile to fd 1 ...
I1101 10:03:46.909869  102096 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:03:46.909880  102096 out.go:374] Setting ErrFile to fd 2...
I1101 10:03:46.909884  102096 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:03:46.910076  102096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
I1101 10:03:46.910657  102096 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:03:46.911264  102096 config.go:182] Loaded profile config "functional-638125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:03:46.911698  102096 cli_runner.go:164] Run: docker container inspect functional-638125 --format={{.State.Status}}
I1101 10:03:46.929648  102096 ssh_runner.go:195] Run: systemctl --version
I1101 10:03:46.929695  102096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-638125
I1101 10:03:46.946337  102096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/functional-638125/id_rsa Username:docker}
I1101 10:03:47.046134  102096 build_images.go:162] Building image from path: /tmp/build.3409593825.tar
I1101 10:03:47.046212  102096 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 10:03:47.054152  102096 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3409593825.tar
I1101 10:03:47.057586  102096 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3409593825.tar: stat -c "%s %y" /var/lib/minikube/build/build.3409593825.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3409593825.tar': No such file or directory
I1101 10:03:47.057610  102096 ssh_runner.go:362] scp /tmp/build.3409593825.tar --> /var/lib/minikube/build/build.3409593825.tar (3072 bytes)
I1101 10:03:47.075095  102096 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3409593825
I1101 10:03:47.082344  102096 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3409593825 -xf /var/lib/minikube/build/build.3409593825.tar
I1101 10:03:47.089762  102096 crio.go:315] Building image: /var/lib/minikube/build/build.3409593825
I1101 10:03:47.089809  102096 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-638125 /var/lib/minikube/build/build.3409593825 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1101 10:03:51.052132  102096 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-638125 /var/lib/minikube/build/build.3409593825 --cgroup-manager=cgroupfs: (3.962298041s)
I1101 10:03:51.052199  102096 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3409593825
I1101 10:03:51.060747  102096 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3409593825.tar
I1101 10:03:51.068247  102096 build_images.go:218] Built localhost/my-image:functional-638125 from /tmp/build.3409593825.tar
I1101 10:03:51.068292  102096 build_images.go:134] succeeded building to: functional-638125
I1101 10:03:51.068296  102096 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.71s)
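
The stderr above shows the fallback path taken when no buildkitd is running in the node: the build context is tarred on the host, copied to /var/lib/minikube/build, unpacked, and built with sudo podman build --cgroup-manager=cgroupfs; the STEP lines also show the context is a three-step Dockerfile (FROM the busybox image, RUN true, ADD content.txt). A minimal sketch of invoking the same build through the CLI rather than reimplementing that plumbing; the names and paths are the ones from the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Build the context inside the node; minikube handles the tar/scp/podman
	// fallback shown in the stderr above when buildkitd is not available.
	build := exec.Command("out/minikube-linux-amd64", "-p", "functional-638125",
		"image", "build", "-t", "localhost/my-image:functional-638125", "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		fmt.Printf("build failed: %v\n%s", err, out)
		return
	}
	// Confirm the tag is now visible to the node's runtime.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-638125", "image", "ls").Output()
	if err == nil {
		fmt.Print(string(out))
	}
}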

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.929066395s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-638125
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-638125 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-638125 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-638125 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 95705: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-638125 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-638125 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-638125 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [0868fc5a-4003-46e4-aa29-6bdaf9b46e78] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [0868fc5a-4003-46e4-aa29-6bdaf9b46e78] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.004228433s
I1101 10:03:28.194643   61522 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.26s)
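The 12s wait above is a label-selector poll: keep asking the cluster for pods matching run=nginx-svc until one reports Running, within the 4m0s budget. A rough equivalent that shells out to kubectl (the context name is a placeholder):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForPodRunning polls kubectl until a pod matching the label selector
// reports phase Running, or the timeout elapses.
func waitForPodRunning(kubeContext, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no pod matching %q became Running within %v", selector, timeout)
}

func main() {
	// Context and selector are illustrative; the test uses its own profile name.
	if err := waitForPodRunning("functional-example", "run=nginx-svc", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is running")
}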

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image rm kicbase/echo-server:functional-638125 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-638125 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.29.69 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
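"tunnel at http://10.98.29.69 is working" is the result of probing the LoadBalancer IP assigned through the tunnel over plain HTTP. A minimal probe with a small retry budget; the address below is the one from this run's log, and in general it would be the ingress IP read from the service status in the previous step:

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// probe retries an HTTP GET until it gets a 200 or runs out of attempts.
func probe(url string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			lastErr = fmt.Errorf("unexpected status %s", resp.Status)
		} else {
			lastErr = err
		}
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	if err := probe("http://10.98.29.69", 10, 2*time.Second); err != nil {
		log.Fatalf("tunnel not reachable: %v", err)
	}
	fmt.Println("tunnel is working")
}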

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-638125 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 version -o=json --components
2025/11/01 10:03:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "337.261177ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.047644ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)
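The `Took "..."` lines are wall-clock timings taken around each invocation; the -l variant comes back faster because it does less per-profile work. A sketch of the same measurement (the binary path is a placeholder):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// timed runs a command and reports how long it took.
func timed(name string, args ...string) (time.Duration, error) {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	return time.Since(start), err
}

func main() {
	bin := "out/minikube-linux-amd64" // placeholder path to the binary under test
	for _, args := range [][]string{
		{"profile", "list"},
		{"profile", "list", "-l"},
	} {
		d, err := timed(bin, args...)
		if err != nil {
			log.Fatalf("%v failed: %v", args, err)
		}
		fmt.Printf("Took %q to run %v\n", d.String(), args)
	}
}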

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "330.418953ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "60.229285ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-638125 /tmp/TestFunctionalparallelMountCmdany-port1118079721/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761991411274490703" to /tmp/TestFunctionalparallelMountCmdany-port1118079721/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761991411274490703" to /tmp/TestFunctionalparallelMountCmdany-port1118079721/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761991411274490703" to /tmp/TestFunctionalparallelMountCmdany-port1118079721/001/test-1761991411274490703
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (278.222644ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 10:03:31.552980   61522 retry.go:31] will retry after 388.431261ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 10:03 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 10:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 10:03 test-1761991411274490703
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh cat /mount-9p/test-1761991411274490703
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-638125 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [0b8672bb-0bd5-442f-9698-33eb57b3f471] Pending
helpers_test.go:352: "busybox-mount" [0b8672bb-0bd5-442f-9698-33eb57b3f471] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [0b8672bb-0bd5-442f-9698-33eb57b3f471] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [0b8672bb-0bd5-442f-9698-33eb57b3f471] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005060792s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-638125 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-638125 /tmp/TestFunctionalparallelMountCmdany-port1118079721/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.61s)
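The first findmnt probe above fails because the 9p mount is established asynchronously by the background `minikube mount` process, so the test retries after a short delay (388ms here). The retry loop is roughly the following; it runs the check locally via sh -c, whereas the test runs the same check through `minikube ssh`:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// waitForMount keeps checking whether the guest path is a 9p mount,
// retrying with a short delay, until it succeeds or runs out of attempts.
func waitForMount(path string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sh", "-c",
			fmt.Sprintf("findmnt -T %s | grep 9p", path)).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return nil
		}
		lastErr = fmt.Errorf("findmnt: %v", err)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	if err := waitForMount("/mount-9p", 5, 500*time.Millisecond); err != nil {
		log.Fatalf("mount never appeared: %v", err)
	}
}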

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-638125 /tmp/TestFunctionalparallelMountCmdspecific-port329698808/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (281.940922ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 10:03:39.161888   61522 retry.go:31] will retry after 712.77485ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh -- ls -la /mount-9p
E1101 10:03:40.382091   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-638125 /tmp/TestFunctionalparallelMountCmdspecific-port329698808/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 ssh "sudo umount -f /mount-9p": exit status 1 (265.823158ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-638125 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-638125 /tmp/TestFunctionalparallelMountCmdspecific-port329698808/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-638125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup137166997/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-638125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup137166997/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-638125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup137166997/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-638125 ssh "findmnt -T" /mount1: exit status 1 (339.225491ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 10:03:41.228744   61522 retry.go:31] will retry after 314.605306ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-638125 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-638125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup137166997/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-638125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup137166997/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-638125 /tmp/TestFunctionalparallelMountCmdVerifyCleanup137166997/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-638125 service list: (1.697112707s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-638125 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-638125 service list -o json: (1.695117751s)
functional_test.go:1504: Took "1.69520961s" to run "out/minikube-linux-amd64 -p functional-638125 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)
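`service list -o json` exists so the output can be consumed programmatically instead of scraping the table. A minimal consumer, decoding generically rather than assuming a fixed schema (the binary path and profile name are placeholders, and the JSON array shape is an assumption):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-amd64" // placeholder path to the binary under test
	out, err := exec.Command(bin, "-p", "functional-example", "service", "list", "-o", "json").Output()
	if err != nil {
		log.Fatalf("service list failed: %v", err)
	}

	// Decode into generic maps so nothing here depends on exact field names.
	var services []map[string]interface{}
	if err := json.Unmarshal(out, &services); err != nil {
		log.Fatalf("unexpected output shape: %v", err)
	}
	for _, svc := range services {
		fmt.Println(svc)
	}
}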

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-638125
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-638125
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-638125
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (111.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-241510 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m50.308335906s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (111.02s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-241510 kubectl -- rollout status deployment/busybox: (4.389336239s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-md5h9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-qph8b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-v7js9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-md5h9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-qph8b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-v7js9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-md5h9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-qph8b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-v7js9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.31s)
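DeployApp verifies in-cluster DNS from every busybox replica: kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local must all resolve from inside each pod. In outline (pod names and context below are examples, not this run's generated names):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	kubeContext := "ha-example" // placeholder context/profile name
	pods := []string{"busybox-aaaaa", "busybox-bbbbb", "busybox-ccccc"}
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	// Every replica must resolve every name; a single failure fails the check.
	for _, host := range hosts {
		for _, pod := range pods {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"exec", pod, "--", "nslookup", host).CombinedOutput()
			if err != nil {
				log.Fatalf("nslookup %s in %s failed: %v\n%s", host, pod, err, out)
			}
			fmt.Printf("%s resolves %s\n", pod, host)
		}
	}
}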

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-md5h9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-md5h9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-qph8b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-qph8b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-v7js9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 kubectl -- exec busybox-7b57f96db7-v7js9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.02s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-241510 node add --alsologtostderr -v 5: (23.598697039s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.52s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-241510 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp testdata/cp-test.txt ha-241510:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2293066237/001/cp-test_ha-241510.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510:/home/docker/cp-test.txt ha-241510-m02:/home/docker/cp-test_ha-241510_ha-241510-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m02 "sudo cat /home/docker/cp-test_ha-241510_ha-241510-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510:/home/docker/cp-test.txt ha-241510-m03:/home/docker/cp-test_ha-241510_ha-241510-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m03 "sudo cat /home/docker/cp-test_ha-241510_ha-241510-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510:/home/docker/cp-test.txt ha-241510-m04:/home/docker/cp-test_ha-241510_ha-241510-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m04 "sudo cat /home/docker/cp-test_ha-241510_ha-241510-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp testdata/cp-test.txt ha-241510-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2293066237/001/cp-test_ha-241510-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510-m02:/home/docker/cp-test.txt ha-241510:/home/docker/cp-test_ha-241510-m02_ha-241510.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510 "sudo cat /home/docker/cp-test_ha-241510-m02_ha-241510.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510-m02:/home/docker/cp-test.txt ha-241510-m03:/home/docker/cp-test_ha-241510-m02_ha-241510-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m03 "sudo cat /home/docker/cp-test_ha-241510-m02_ha-241510-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510-m02:/home/docker/cp-test.txt ha-241510-m04:/home/docker/cp-test_ha-241510-m02_ha-241510-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m04 "sudo cat /home/docker/cp-test_ha-241510-m02_ha-241510-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp testdata/cp-test.txt ha-241510-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2293066237/001/cp-test_ha-241510-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510-m03:/home/docker/cp-test.txt ha-241510:/home/docker/cp-test_ha-241510-m03_ha-241510.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510 "sudo cat /home/docker/cp-test_ha-241510-m03_ha-241510.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510-m03:/home/docker/cp-test.txt ha-241510-m02:/home/docker/cp-test_ha-241510-m03_ha-241510-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m02 "sudo cat /home/docker/cp-test_ha-241510-m03_ha-241510-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510-m03:/home/docker/cp-test.txt ha-241510-m04:/home/docker/cp-test_ha-241510-m03_ha-241510-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m04 "sudo cat /home/docker/cp-test_ha-241510-m03_ha-241510-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp testdata/cp-test.txt ha-241510-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2293066237/001/cp-test_ha-241510-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510-m04:/home/docker/cp-test.txt ha-241510:/home/docker/cp-test_ha-241510-m04_ha-241510.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510 "sudo cat /home/docker/cp-test_ha-241510-m04_ha-241510.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510-m04:/home/docker/cp-test.txt ha-241510-m02:/home/docker/cp-test_ha-241510-m04_ha-241510-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m02 "sudo cat /home/docker/cp-test_ha-241510-m04_ha-241510-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 cp ha-241510-m04:/home/docker/cp-test.txt ha-241510-m03:/home/docker/cp-test_ha-241510-m04_ha-241510-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 ssh -n ha-241510-m03 "sudo cat /home/docker/cp-test_ha-241510-m04_ha-241510-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.13s)
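CopyFile is a full matrix: seed each node with a known file via `minikube cp`, copy it across to every other node, and re-read it over `minikube ssh` each time to confirm the contents survived. The nested loop looks roughly like this sketch (profile and node names are placeholders):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// minikube wraps `out/minikube-linux-amd64 -p <profile> <args...>`.
func minikube(profile string, args ...string) error {
	full := append([]string{"-p", profile}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	profile := "ha-example" // placeholder profile name
	nodes := []string{"ha-example", "ha-example-m02", "ha-example-m03", "ha-example-m04"}

	for _, src := range nodes {
		// Seed the source node with a known file.
		if err := minikube(profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt"); err != nil {
			log.Fatal(err)
		}
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			copied := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			if err := minikube(profile, "cp", src+":/home/docker/cp-test.txt", dst+":"+copied); err != nil {
				log.Fatal(err)
			}
			// Re-read the copied file on the destination to confirm it arrived intact.
			if err := minikube(profile, "ssh", "-n", dst, "sudo cat "+copied); err != nil {
				log.Fatal(err)
			}
		}
	}
}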

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (19.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-241510 node stop m02 --alsologtostderr -v 5: (19.077105393s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-241510 status --alsologtostderr -v 5: exit status 7 (690.638662ms)

                                                
                                                
-- stdout --
	ha-241510
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-241510-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-241510-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-241510-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:16:40.190338  126464 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:16:40.190595  126464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:40.190605  126464 out.go:374] Setting ErrFile to fd 2...
	I1101 10:16:40.190611  126464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:16:40.190819  126464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:16:40.191029  126464 out.go:368] Setting JSON to false
	I1101 10:16:40.191057  126464 mustload.go:66] Loading cluster: ha-241510
	I1101 10:16:40.191183  126464 notify.go:221] Checking for updates...
	I1101 10:16:40.191460  126464 config.go:182] Loaded profile config "ha-241510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:16:40.191477  126464 status.go:174] checking status of ha-241510 ...
	I1101 10:16:40.193098  126464 cli_runner.go:164] Run: docker container inspect ha-241510 --format={{.State.Status}}
	I1101 10:16:40.214556  126464 status.go:371] ha-241510 host status = "Running" (err=<nil>)
	I1101 10:16:40.214585  126464 host.go:66] Checking if "ha-241510" exists ...
	I1101 10:16:40.214919  126464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-241510
	I1101 10:16:40.234177  126464 host.go:66] Checking if "ha-241510" exists ...
	I1101 10:16:40.234417  126464 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:16:40.234454  126464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-241510
	I1101 10:16:40.251696  126464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/ha-241510/id_rsa Username:docker}
	I1101 10:16:40.347848  126464 ssh_runner.go:195] Run: systemctl --version
	I1101 10:16:40.354054  126464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:16:40.366289  126464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:16:40.423534  126464 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-01 10:16:40.413394418 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:16:40.424059  126464 kubeconfig.go:125] found "ha-241510" server: "https://192.168.49.254:8443"
	I1101 10:16:40.424094  126464 api_server.go:166] Checking apiserver status ...
	I1101 10:16:40.424137  126464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:16:40.436391  126464 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1249/cgroup
	W1101 10:16:40.445598  126464 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1249/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:40.445664  126464 ssh_runner.go:195] Run: ls
	I1101 10:16:40.450317  126464 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 10:16:40.454218  126464 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 10:16:40.454242  126464 status.go:463] ha-241510 apiserver status = Running (err=<nil>)
	I1101 10:16:40.454258  126464 status.go:176] ha-241510 status: &{Name:ha-241510 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:16:40.454276  126464 status.go:174] checking status of ha-241510-m02 ...
	I1101 10:16:40.454567  126464 cli_runner.go:164] Run: docker container inspect ha-241510-m02 --format={{.State.Status}}
	I1101 10:16:40.472199  126464 status.go:371] ha-241510-m02 host status = "Stopped" (err=<nil>)
	I1101 10:16:40.472216  126464 status.go:384] host is not running, skipping remaining checks
	I1101 10:16:40.472222  126464 status.go:176] ha-241510-m02 status: &{Name:ha-241510-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:16:40.472255  126464 status.go:174] checking status of ha-241510-m03 ...
	I1101 10:16:40.472555  126464 cli_runner.go:164] Run: docker container inspect ha-241510-m03 --format={{.State.Status}}
	I1101 10:16:40.489015  126464 status.go:371] ha-241510-m03 host status = "Running" (err=<nil>)
	I1101 10:16:40.489038  126464 host.go:66] Checking if "ha-241510-m03" exists ...
	I1101 10:16:40.489279  126464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-241510-m03
	I1101 10:16:40.505320  126464 host.go:66] Checking if "ha-241510-m03" exists ...
	I1101 10:16:40.505621  126464 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:16:40.505665  126464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-241510-m03
	I1101 10:16:40.521999  126464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/ha-241510-m03/id_rsa Username:docker}
	I1101 10:16:40.619253  126464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:16:40.631371  126464 kubeconfig.go:125] found "ha-241510" server: "https://192.168.49.254:8443"
	I1101 10:16:40.631404  126464 api_server.go:166] Checking apiserver status ...
	I1101 10:16:40.631452  126464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:16:40.641868  126464 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W1101 10:16:40.650195  126464 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:16:40.650260  126464 ssh_runner.go:195] Run: ls
	I1101 10:16:40.653858  126464 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 10:16:40.658871  126464 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 10:16:40.658893  126464 status.go:463] ha-241510-m03 apiserver status = Running (err=<nil>)
	I1101 10:16:40.658902  126464 status.go:176] ha-241510-m03 status: &{Name:ha-241510-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:16:40.658920  126464 status.go:174] checking status of ha-241510-m04 ...
	I1101 10:16:40.659202  126464 cli_runner.go:164] Run: docker container inspect ha-241510-m04 --format={{.State.Status}}
	I1101 10:16:40.676992  126464 status.go:371] ha-241510-m04 host status = "Running" (err=<nil>)
	I1101 10:16:40.677012  126464 host.go:66] Checking if "ha-241510-m04" exists ...
	I1101 10:16:40.677284  126464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-241510-m04
	I1101 10:16:40.693927  126464 host.go:66] Checking if "ha-241510-m04" exists ...
	I1101 10:16:40.694223  126464 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:16:40.694281  126464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-241510-m04
	I1101 10:16:40.711588  126464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/ha-241510-m04/id_rsa Username:docker}
	I1101 10:16:40.808657  126464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:16:40.820740  126464 status.go:176] ha-241510-m04 status: &{Name:ha-241510-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.77s)
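`minikube status` exits non-zero (7 in this run) when not every node is running, so the check above treats the exit code as data rather than as a hard failure and still parses stdout. In Go that means inspecting the *exec.ExitError instead of bailing out on the first error; a sketch with placeholder binary path and profile name:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-example", "status", "--alsologtostderr", "-v", "5")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		// Every node is running.
	case errors.As(err, &exitErr):
		// A non-zero exit just means some component is stopped; stdout is still valid.
		fmt.Printf("status exited with code %d, output still usable\n", exitErr.ExitCode())
	default:
		log.Fatalf("could not run status at all: %v", err)
	}
	fmt.Print(string(out))
}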

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-241510 node start m02 --alsologtostderr -v 5: (13.727188585s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.67s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (102.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 stop --alsologtostderr -v 5
E1101 10:17:18.447729   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-241510 stop --alsologtostderr -v 5: (43.531279678s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 start --wait true --alsologtostderr -v 5
E1101 10:18:11.412561   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:18:11.418991   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:18:11.430436   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:18:11.451805   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:18:11.493989   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:18:11.576054   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:18:11.738764   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:18:12.060918   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:18:12.702437   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:18:13.984149   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:18:16.546251   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:18:21.667979   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:18:31.909631   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-241510 start --wait true --alsologtostderr -v 5: (58.511130237s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (102.17s)
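RestartClusterKeepsNodes reduces to: record the `node list` output, stop and restart the whole cluster, then confirm the list is unchanged. A compressed sketch (profile name and flags are illustrative):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func nodeList(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "node", "list").Output()
	return string(out), err
}

func main() {
	profile := "ha-example" // placeholder profile name

	before, err := nodeList(profile)
	if err != nil {
		log.Fatal(err)
	}
	// Stop and restart the whole cluster under the same profile.
	for _, args := range [][]string{{"stop"}, {"start", "--wait", "true"}} {
		full := append([]string{"-p", profile}, args...)
		if err := exec.Command("out/minikube-linux-amd64", full...).Run(); err != nil {
			log.Fatalf("%v failed: %v", args, err)
		}
	}
	after, err := nodeList(profile)
	if err != nil {
		log.Fatal(err)
	}
	if before != after {
		log.Fatalf("node list changed across restart:\nbefore:\n%s\nafter:\n%s", before, after)
	}
	fmt.Println("node list preserved across restart")
}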

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 node delete m03 --alsologtostderr -v 5
E1101 10:18:41.508272   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-241510 node delete m03 --alsologtostderr -v 5: (9.69369788s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.50s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

TestMultiControlPlane/serial/StopCluster (43.17s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 stop --alsologtostderr -v 5
E1101 10:18:52.391744   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:19:33.353486   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-241510 stop --alsologtostderr -v 5: (43.053773772s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-241510 status --alsologtostderr -v 5: exit status 7 (114.927569ms)
-- stdout --
	ha-241510
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-241510-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-241510-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1101 10:19:33.561138  140835 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:19:33.561436  140835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:19:33.561447  140835 out.go:374] Setting ErrFile to fd 2...
	I1101 10:19:33.561454  140835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:19:33.561778  140835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:19:33.561986  140835 out.go:368] Setting JSON to false
	I1101 10:19:33.562014  140835 mustload.go:66] Loading cluster: ha-241510
	I1101 10:19:33.562129  140835 notify.go:221] Checking for updates...
	I1101 10:19:33.562486  140835 config.go:182] Loaded profile config "ha-241510": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:19:33.562658  140835 status.go:174] checking status of ha-241510 ...
	I1101 10:19:33.563485  140835 cli_runner.go:164] Run: docker container inspect ha-241510 --format={{.State.Status}}
	I1101 10:19:33.582660  140835 status.go:371] ha-241510 host status = "Stopped" (err=<nil>)
	I1101 10:19:33.582690  140835 status.go:384] host is not running, skipping remaining checks
	I1101 10:19:33.582699  140835 status.go:176] ha-241510 status: &{Name:ha-241510 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:19:33.582738  140835 status.go:174] checking status of ha-241510-m02 ...
	I1101 10:19:33.582992  140835 cli_runner.go:164] Run: docker container inspect ha-241510-m02 --format={{.State.Status}}
	I1101 10:19:33.600894  140835 status.go:371] ha-241510-m02 host status = "Stopped" (err=<nil>)
	I1101 10:19:33.600910  140835 status.go:384] host is not running, skipping remaining checks
	I1101 10:19:33.600916  140835 status.go:176] ha-241510-m02 status: &{Name:ha-241510-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:19:33.600934  140835 status.go:174] checking status of ha-241510-m04 ...
	I1101 10:19:33.601141  140835 cli_runner.go:164] Run: docker container inspect ha-241510-m04 --format={{.State.Status}}
	I1101 10:19:33.617663  140835 status.go:371] ha-241510-m04 host status = "Stopped" (err=<nil>)
	I1101 10:19:33.617681  140835 status.go:384] host is not running, skipping remaining checks
	I1101 10:19:33.617687  140835 status.go:176] ha-241510-m04 status: &{Name:ha-241510-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.17s)

TestMultiControlPlane/serial/RestartCluster (58.61s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-241510 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (57.806478432s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.61s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (42.41s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 node add --control-plane --alsologtostderr -v 5
E1101 10:20:55.278601   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-241510 node add --control-plane --alsologtostderr -v 5: (41.535084698s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-241510 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.41s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (40.76s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-762511 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-762511 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.757695109s)
--- PASS: TestJSONOutput/start/Command (40.76s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.96s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-762511 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-762511 --output=json --user=testUser: (7.959698966s)
--- PASS: TestJSONOutput/stop/Command (7.96s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-542412 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-542412 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.606331ms)
-- stdout --
	{"specversion":"1.0","id":"37c5902e-487a-4c60-a061-8fd8cd9387a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-542412] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f7f1e4d-45f5-4a44-acbc-d6c822eeec45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21830"}}
	{"specversion":"1.0","id":"227aa4e1-b02b-4276-8dd8-690853ea33a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"86148f35-abd0-45b6-ad9b-465f1246b933","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig"}}
	{"specversion":"1.0","id":"9e5c5996-f606-47e1-a632-a9171082b71c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube"}}
	{"specversion":"1.0","id":"1783bf51-2e96-47ed-8cbc-f552fb54e608","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"47ccb60c-507d-4d44-99d0-581e389cd087","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2c4225e0-0da3-49e8-93dd-a4e83cd73de0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-542412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-542412
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (38.24s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-292582 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-292582 --network=: (36.079017795s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-292582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-292582
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-292582: (2.14551524s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.24s)

TestKicCustomNetwork/use_default_bridge_network (23.69s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-654992 --network=bridge
E1101 10:23:11.413864   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-654992 --network=bridge: (21.691475774s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-654992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-654992
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-654992: (1.976161722s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.69s)

TestKicExistingNetwork (23.43s)
=== RUN   TestKicExistingNetwork
I1101 10:23:23.654906   61522 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1101 10:23:23.671111   61522 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1101 10:23:23.671229   61522 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1101 10:23:23.671273   61522 cli_runner.go:164] Run: docker network inspect existing-network
W1101 10:23:23.687114   61522 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1101 10:23:23.687143   61522 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1101 10:23:23.687185   61522 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1101 10:23:23.687343   61522 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 10:23:23.703175   61522 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ac7093b735a5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:19:58:44:be:58} reservation:<nil>}
I1101 10:23:23.703582   61522 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000305a90}
I1101 10:23:23.703609   61522 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1101 10:23:23.703651   61522 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1101 10:23:23.757706   61522 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-732409 --network=existing-network
E1101 10:23:39.120448   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-732409 --network=existing-network: (21.342976821s)
helpers_test.go:175: Cleaning up "existing-network-732409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-732409
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-732409: (1.952509452s)
I1101 10:23:47.072720   61522 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.43s)

TestKicCustomSubnet (24.84s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-469520 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-469520 --subnet=192.168.60.0/24: (22.653249502s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-469520 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-469520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-469520
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-469520: (2.163625957s)
--- PASS: TestKicCustomSubnet (24.84s)

TestKicStaticIP (25.93s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-656332 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-656332 --static-ip=192.168.200.200: (23.692283203s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-656332 ip
helpers_test.go:175: Cleaning up "static-ip-656332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-656332
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-656332: (2.101365531s)
--- PASS: TestKicStaticIP (25.93s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (48.83s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-685224 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-685224 --driver=docker  --container-runtime=crio: (21.19874401s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-687421 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-687421 --driver=docker  --container-runtime=crio: (21.690484042s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-685224
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-687421
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-687421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-687421
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-687421: (2.354710285s)
helpers_test.go:175: Cleaning up "first-685224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-685224
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-685224: (2.365608127s)
--- PASS: TestMinikubeProfile (48.83s)

TestMountStart/serial/StartWithMountFirst (6.15s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-727625 --memory=3072 --mount-string /tmp/TestMountStartserial105388765/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-727625 --memory=3072 --mount-string /tmp/TestMountStartserial105388765/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.152876693s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.15s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-727625 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.46s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-740590 --memory=3072 --mount-string /tmp/TestMountStartserial105388765/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-740590 --memory=3072 --mount-string /tmp/TestMountStartserial105388765/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.455354193s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.46s)

TestMountStart/serial/VerifyMountSecond (0.27s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-740590 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-727625 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-727625 --alsologtostderr -v=5: (1.70392855s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-740590 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.24s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-740590
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-740590: (1.244664036s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.98s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-740590
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-740590: (6.979483592s)
--- PASS: TestMountStart/serial/RestartStopped (7.98s)

TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-740590 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (65.23s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-063264 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-063264 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m4.756046284s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.23s)

TestMultiNode/serial/DeployApp2Nodes (4.7s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-063264 -- rollout status deployment/busybox: (3.259074198s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- exec busybox-7b57f96db7-mf8tv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- exec busybox-7b57f96db7-nsfnh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- exec busybox-7b57f96db7-mf8tv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- exec busybox-7b57f96db7-nsfnh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- exec busybox-7b57f96db7-mf8tv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- exec busybox-7b57f96db7-nsfnh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.70s)

TestMultiNode/serial/PingHostFrom2Pods (0.74s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- exec busybox-7b57f96db7-mf8tv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- exec busybox-7b57f96db7-mf8tv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- exec busybox-7b57f96db7-nsfnh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-063264 -- exec busybox-7b57f96db7-nsfnh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)

TestMultiNode/serial/AddNode (24.1s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-063264 -v=5 --alsologtostderr
E1101 10:27:18.445196   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-063264 -v=5 --alsologtostderr: (23.469145805s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.10s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-063264 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.65s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (9.64s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 cp testdata/cp-test.txt multinode-063264:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 cp multinode-063264:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile222722432/001/cp-test_multinode-063264.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 cp multinode-063264:/home/docker/cp-test.txt multinode-063264-m02:/home/docker/cp-test_multinode-063264_multinode-063264-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264-m02 "sudo cat /home/docker/cp-test_multinode-063264_multinode-063264-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 cp multinode-063264:/home/docker/cp-test.txt multinode-063264-m03:/home/docker/cp-test_multinode-063264_multinode-063264-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264-m03 "sudo cat /home/docker/cp-test_multinode-063264_multinode-063264-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 cp testdata/cp-test.txt multinode-063264-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 cp multinode-063264-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile222722432/001/cp-test_multinode-063264-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 cp multinode-063264-m02:/home/docker/cp-test.txt multinode-063264:/home/docker/cp-test_multinode-063264-m02_multinode-063264.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264 "sudo cat /home/docker/cp-test_multinode-063264-m02_multinode-063264.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 cp multinode-063264-m02:/home/docker/cp-test.txt multinode-063264-m03:/home/docker/cp-test_multinode-063264-m02_multinode-063264-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264-m03 "sudo cat /home/docker/cp-test_multinode-063264-m02_multinode-063264-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 cp testdata/cp-test.txt multinode-063264-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 cp multinode-063264-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile222722432/001/cp-test_multinode-063264-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 cp multinode-063264-m03:/home/docker/cp-test.txt multinode-063264:/home/docker/cp-test_multinode-063264-m03_multinode-063264.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264 "sudo cat /home/docker/cp-test_multinode-063264-m03_multinode-063264.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 cp multinode-063264-m03:/home/docker/cp-test.txt multinode-063264-m02:/home/docker/cp-test_multinode-063264-m03_multinode-063264-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 ssh -n multinode-063264-m02 "sudo cat /home/docker/cp-test_multinode-063264-m03_multinode-063264-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.64s)

TestMultiNode/serial/StopNode (2.24s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-063264 node stop m03: (1.260345295s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-063264 status: exit status 7 (495.695792ms)
-- stdout --
	multinode-063264
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-063264-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-063264-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-063264 status --alsologtostderr: exit status 7 (482.870653ms)
-- stdout --
	multinode-063264
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-063264-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-063264-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1101 10:27:39.136330  200605 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:27:39.136640  200605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:27:39.136650  200605 out.go:374] Setting ErrFile to fd 2...
	I1101 10:27:39.136656  200605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:27:39.137428  200605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:27:39.137966  200605 out.go:368] Setting JSON to false
	I1101 10:27:39.137993  200605 mustload.go:66] Loading cluster: multinode-063264
	I1101 10:27:39.138078  200605 notify.go:221] Checking for updates...
	I1101 10:27:39.138407  200605 config.go:182] Loaded profile config "multinode-063264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:27:39.138425  200605 status.go:174] checking status of multinode-063264 ...
	I1101 10:27:39.138857  200605 cli_runner.go:164] Run: docker container inspect multinode-063264 --format={{.State.Status}}
	I1101 10:27:39.156013  200605 status.go:371] multinode-063264 host status = "Running" (err=<nil>)
	I1101 10:27:39.156032  200605 host.go:66] Checking if "multinode-063264" exists ...
	I1101 10:27:39.156265  200605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-063264
	I1101 10:27:39.172479  200605 host.go:66] Checking if "multinode-063264" exists ...
	I1101 10:27:39.172761  200605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:27:39.172795  200605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-063264
	I1101 10:27:39.189031  200605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/multinode-063264/id_rsa Username:docker}
	I1101 10:27:39.284866  200605 ssh_runner.go:195] Run: systemctl --version
	I1101 10:27:39.290905  200605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:27:39.302316  200605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:27:39.356181  200605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-01 10:27:39.347363146 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:27:39.356706  200605 kubeconfig.go:125] found "multinode-063264" server: "https://192.168.67.2:8443"
	I1101 10:27:39.356732  200605 api_server.go:166] Checking apiserver status ...
	I1101 10:27:39.356769  200605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:27:39.368072  200605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1247/cgroup
	W1101 10:27:39.376322  200605 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1247/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:27:39.376373  200605 ssh_runner.go:195] Run: ls
	I1101 10:27:39.379829  200605 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1101 10:27:39.383885  200605 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1101 10:27:39.383909  200605 status.go:463] multinode-063264 apiserver status = Running (err=<nil>)
	I1101 10:27:39.383923  200605 status.go:176] multinode-063264 status: &{Name:multinode-063264 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:27:39.383995  200605 status.go:174] checking status of multinode-063264-m02 ...
	I1101 10:27:39.384260  200605 cli_runner.go:164] Run: docker container inspect multinode-063264-m02 --format={{.State.Status}}
	I1101 10:27:39.401109  200605 status.go:371] multinode-063264-m02 host status = "Running" (err=<nil>)
	I1101 10:27:39.401134  200605 host.go:66] Checking if "multinode-063264-m02" exists ...
	I1101 10:27:39.401397  200605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-063264-m02
	I1101 10:27:39.417766  200605 host.go:66] Checking if "multinode-063264-m02" exists ...
	I1101 10:27:39.418024  200605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:27:39.418062  200605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-063264-m02
	I1101 10:27:39.434257  200605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21830-58021/.minikube/machines/multinode-063264-m02/id_rsa Username:docker}
	I1101 10:27:39.529390  200605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:27:39.541269  200605 status.go:176] multinode-063264-m02 status: &{Name:multinode-063264-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:27:39.541305  200605 status.go:174] checking status of multinode-063264-m03 ...
	I1101 10:27:39.541600  200605 cli_runner.go:164] Run: docker container inspect multinode-063264-m03 --format={{.State.Status}}
	I1101 10:27:39.558325  200605 status.go:371] multinode-063264-m03 host status = "Stopped" (err=<nil>)
	I1101 10:27:39.558344  200605 status.go:384] host is not running, skipping remaining checks
	I1101 10:27:39.558350  200605 status.go:176] multinode-063264-m03 status: &{Name:multinode-063264-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-063264 node start m03 -v=5 --alsologtostderr: (6.389118191s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.08s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (76.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-063264
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-063264
E1101 10:28:11.413984   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-063264: (31.237131438s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-063264 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-063264 --wait=true -v=5 --alsologtostderr: (45.263009204s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-063264
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.63s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-063264 node delete m03: (4.654053819s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.25s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-063264 stop: (28.191778017s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-063264 status: exit status 7 (94.135468ms)

                                                
                                                
-- stdout --
	multinode-063264
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-063264-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-063264 status --alsologtostderr: exit status 7 (97.610997ms)

                                                
                                                
-- stdout --
	multinode-063264
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-063264-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:29:36.859367  210336 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:29:36.859637  210336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:29:36.859647  210336 out.go:374] Setting ErrFile to fd 2...
	I1101 10:29:36.859651  210336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:29:36.859859  210336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:29:36.860031  210336 out.go:368] Setting JSON to false
	I1101 10:29:36.860056  210336 mustload.go:66] Loading cluster: multinode-063264
	I1101 10:29:36.860115  210336 notify.go:221] Checking for updates...
	I1101 10:29:36.860640  210336 config.go:182] Loaded profile config "multinode-063264": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:29:36.860662  210336 status.go:174] checking status of multinode-063264 ...
	I1101 10:29:36.861259  210336 cli_runner.go:164] Run: docker container inspect multinode-063264 --format={{.State.Status}}
	I1101 10:29:36.881276  210336 status.go:371] multinode-063264 host status = "Stopped" (err=<nil>)
	I1101 10:29:36.881295  210336 status.go:384] host is not running, skipping remaining checks
	I1101 10:29:36.881301  210336 status.go:176] multinode-063264 status: &{Name:multinode-063264 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:29:36.881335  210336 status.go:174] checking status of multinode-063264-m02 ...
	I1101 10:29:36.881620  210336 cli_runner.go:164] Run: docker container inspect multinode-063264-m02 --format={{.State.Status}}
	I1101 10:29:36.898749  210336 status.go:371] multinode-063264-m02 host status = "Stopped" (err=<nil>)
	I1101 10:29:36.898768  210336 status.go:384] host is not running, skipping remaining checks
	I1101 10:29:36.898784  210336 status.go:176] multinode-063264-m02 status: &{Name:multinode-063264-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.38s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (27.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-063264 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-063264 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (27.23915688s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-063264 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (27.90s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-063264
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-063264-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-063264-m02 --driver=docker  --container-runtime=crio: exit status 14 (80.370577ms)

                                                
                                                
-- stdout --
	* [multinode-063264-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-063264-m02' is duplicated with machine name 'multinode-063264-m02' in profile 'multinode-063264'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-063264-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-063264-m03 --driver=docker  --container-runtime=crio: (20.827282248s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-063264
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-063264: exit status 80 (287.17272ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-063264 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-063264-m03 already exists in multinode-063264-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-063264-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-063264-m03: (2.403204229s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.66s)

                                                
                                    
TestPreload (94.14s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-445718 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-445718 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (48.231525893s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-445718 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-445718 image pull gcr.io/k8s-minikube/busybox: (2.462465626s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-445718
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-445718: (5.810236729s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-445718 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-445718 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (34.965912293s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-445718 image list
helpers_test.go:175: Cleaning up "test-preload-445718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-445718
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-445718: (2.42953114s)
--- PASS: TestPreload (94.14s)

                                                
                                    
TestScheduledStopUnix (96.99s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-134850 --memory=3072 --driver=docker  --container-runtime=crio
E1101 10:32:18.447971   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-134850 --memory=3072 --driver=docker  --container-runtime=crio: (20.126371265s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-134850 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-134850 -n scheduled-stop-134850
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-134850 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 10:32:27.304541   61522 retry.go:31] will retry after 110.402µs: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.305725   61522 retry.go:31] will retry after 163.518µs: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.306866   61522 retry.go:31] will retry after 199.556µs: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.308024   61522 retry.go:31] will retry after 214.95µs: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.309153   61522 retry.go:31] will retry after 658.856µs: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.310268   61522 retry.go:31] will retry after 948.236µs: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.311412   61522 retry.go:31] will retry after 1.221777ms: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.313636   61522 retry.go:31] will retry after 2.380804ms: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.316876   61522 retry.go:31] will retry after 1.419756ms: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.319082   61522 retry.go:31] will retry after 3.588039ms: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.323256   61522 retry.go:31] will retry after 7.127634ms: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.331468   61522 retry.go:31] will retry after 5.276185ms: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.337680   61522 retry.go:31] will retry after 12.44347ms: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.354129   61522 retry.go:31] will retry after 11.30791ms: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.366359   61522 retry.go:31] will retry after 31.67816ms: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
I1101 10:32:27.398923   61522 retry.go:31] will retry after 59.918435ms: open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/scheduled-stop-134850/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-134850 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-134850 -n scheduled-stop-134850
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-134850
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-134850 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1101 10:33:11.414133   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-134850
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-134850: exit status 7 (78.513339ms)

                                                
                                                
-- stdout --
	scheduled-stop-134850
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-134850 -n scheduled-stop-134850
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-134850 -n scheduled-stop-134850: exit status 7 (76.165045ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-134850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-134850
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-134850: (5.357340787s)
--- PASS: TestScheduledStopUnix (96.99s)

                                                
                                    
TestInsufficientStorage (9.57s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-638294 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-638294 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.099341885s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ee07cca0-c07b-4e06-ac50-2c7f2525be93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-638294] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"163f521b-3fff-4f31-906a-f2c588a9c080","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21830"}}
	{"specversion":"1.0","id":"b8e6ab9e-9846-4439-add1-35591b72a989","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"47b341e1-8b1e-4c02-a516-992910e0fe58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig"}}
	{"specversion":"1.0","id":"a8055240-5210-4661-8dfb-ab33f14c8bb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube"}}
	{"specversion":"1.0","id":"732d22bc-5cc1-49c5-93df-6f9fa04c1d14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"271494d6-c4da-4b82-a0a0-3801e42f6477","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"25df1bb0-075a-4f0f-9ba1-448a807aa2c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"39915322-fa4c-40bd-a6ca-0ebdfc383cfb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0b277a00-28ae-42d4-b171-4d45e426d57f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"799a09b9-c1ac-46bc-a52f-1e73bcaaa64e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"87f61a3c-5146-475a-a3fe-56afe8df35bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-638294\" primary control-plane node in \"insufficient-storage-638294\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e8b1b6cb-4b69-41c7-beb5-43193f82b003","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ed9dd95-f890-4c33-bcd5-a88a55e3871c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bae15d4f-49dd-44a1-b706-7580910e4721","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-638294 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-638294 --output=json --layout=cluster: exit status 7 (288.53991ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-638294","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-638294","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 10:33:51.097416  230501 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-638294" does not appear in /home/jenkins/minikube-integration/21830-58021/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-638294 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-638294 --output=json --layout=cluster: exit status 7 (282.503583ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-638294","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-638294","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 10:33:51.380144  230613 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-638294" does not appear in /home/jenkins/minikube-integration/21830-58021/kubeconfig
	E1101 10:33:51.390528  230613 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/insufficient-storage-638294/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-638294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-638294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-638294: (1.895571146s)
--- PASS: TestInsufficientStorage (9.57s)

                                                
                                    
TestRunningBinaryUpgrade (56.51s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2281819574 start -p running-upgrade-376123 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1101 10:35:21.509992   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2281819574 start -p running-upgrade-376123 --memory=3072 --vm-driver=docker  --container-runtime=crio: (26.640945235s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-376123 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-376123 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.341925686s)
helpers_test.go:175: Cleaning up "running-upgrade-376123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-376123
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-376123: (2.426060455s)
--- PASS: TestRunningBinaryUpgrade (56.51s)

                                                
                                    
TestKubernetesUpgrade (313.86s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-896514 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-896514 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.195731672s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-896514
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-896514: (9.326949352s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-896514 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-896514 status --format={{.Host}}: exit status 7 (97.764592ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-896514 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1101 10:34:34.482230   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-896514 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.49247212s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-896514 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-896514 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-896514 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (82.552522ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-896514] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-896514
	    minikube start -p kubernetes-upgrade-896514 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8965142 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-896514 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-896514 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-896514 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.06606151s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-896514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-896514
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-896514: (2.545935392s)
--- PASS: TestKubernetesUpgrade (313.86s)

                                                
                                    
TestMissingContainerUpgrade (113.86s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1096939547 start -p missing-upgrade-834138 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1096939547 start -p missing-upgrade-834138 --memory=3072 --driver=docker  --container-runtime=crio: (1m1.631541941s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-834138
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-834138: (1.76087549s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-834138
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-834138 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-834138 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.175316083s)
helpers_test.go:175: Cleaning up "missing-upgrade-834138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-834138
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-834138: (2.589157982s)
--- PASS: TestMissingContainerUpgrade (113.86s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.77s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (77.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.830231728 start -p stopped-upgrade-818439 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.830231728 start -p stopped-upgrade-818439 --memory=3072 --vm-driver=docker  --container-runtime=crio: (1m1.21541064s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.830231728 -p stopped-upgrade-818439 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.830231728 -p stopped-upgrade-818439 stop: (1.870517674s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-818439 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-818439 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.700845092s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (77.79s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-818439
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-818439: (1.016707636s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                    
TestNetworkPlugins/group/false (3.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-299863 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-299863 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (174.270102ms)

                                                
                                                
-- stdout --
	* [false-299863] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:35:17.295946  247840 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:35:17.296199  247840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:35:17.296209  247840 out.go:374] Setting ErrFile to fd 2...
	I1101 10:35:17.296212  247840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:35:17.296464  247840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-58021/.minikube/bin
	I1101 10:35:17.296956  247840 out.go:368] Setting JSON to false
	I1101 10:35:17.297974  247840 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8257,"bootTime":1761985060,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:35:17.298061  247840 start.go:143] virtualization: kvm guest
	I1101 10:35:17.299986  247840 out.go:179] * [false-299863] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:35:17.301082  247840 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:35:17.301115  247840 notify.go:221] Checking for updates...
	I1101 10:35:17.303201  247840 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:35:17.304226  247840 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	I1101 10:35:17.305319  247840 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	I1101 10:35:17.306568  247840 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:35:17.311004  247840 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:35:17.312833  247840 config.go:182] Loaded profile config "kubernetes-upgrade-896514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:35:17.312976  247840 config.go:182] Loaded profile config "missing-upgrade-834138": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 10:35:17.313114  247840 config.go:182] Loaded profile config "stopped-upgrade-818439": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 10:35:17.313266  247840 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:35:17.338936  247840 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 10:35:17.339028  247840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 10:35:17.400826  247840 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:false NGoroutines:63 SystemTime:2025-11-01 10:35:17.390242729 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 10:35:17.400929  247840 docker.go:319] overlay module found
	I1101 10:35:17.402680  247840 out.go:179] * Using the docker driver based on user configuration
	I1101 10:35:17.405350  247840 start.go:309] selected driver: docker
	I1101 10:35:17.405367  247840 start.go:930] validating driver "docker" against <nil>
	I1101 10:35:17.405391  247840 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:35:17.406927  247840 out.go:203] 
	W1101 10:35:17.407756  247840 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 10:35:17.408629  247840 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-299863 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-299863

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-299863

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-299863

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-299863

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-299863

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-299863

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-299863

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-299863

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-299863

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-299863

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-299863

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-299863" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: iptables table nat:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> k8s: describe kube-proxy daemon set:
error: context "false-299863" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-299863" does not exist

>>> k8s: kube-proxy logs:
error: context "false-299863" does not exist

>>> host: kubelet daemon status:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: kubelet daemon config:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> k8s: kubelet logs:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:34:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-896514
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:34:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-834138
contexts:
- context:
    cluster: kubernetes-upgrade-896514
    user: kubernetes-upgrade-896514
  name: kubernetes-upgrade-896514
- context:
    cluster: missing-upgrade-834138
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:34:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-834138
  name: missing-upgrade-834138
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-896514
  user:
    client-certificate: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/kubernetes-upgrade-896514/client.crt
    client-key: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/kubernetes-upgrade-896514/client.key
- name: missing-upgrade-834138
  user:
    client-certificate: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/missing-upgrade-834138/client.crt
    client-key: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/missing-upgrade-834138/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-299863

>>> host: docker daemon status:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: docker daemon config:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: /etc/docker/daemon.json:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: docker system info:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: cri-docker daemon status:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: cri-docker daemon config:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: cri-dockerd version:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: containerd daemon status:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: containerd daemon config:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: /etc/containerd/config.toml:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: containerd config dump:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: crio daemon status:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: crio daemon config:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: /etc/crio:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

>>> host: crio config:
* Profile "false-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-299863"

----------------------- debugLogs end: false-299863 [took: 3.202364819s] --------------------------------
helpers_test.go:175: Cleaning up "false-299863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-299863
--- PASS: TestNetworkPlugins/group/false (3.53s)

                                                
                                    
x
+
TestPause/serial/Start (44.63s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-405879 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-405879 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.628195937s)
--- PASS: TestPause/serial/Start (44.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-585638 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-585638 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (108.758743ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-585638] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-58021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-58021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (21.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-585638 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-585638 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.127653157s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-585638 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (21.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-585638 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-585638 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.899360516s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-585638 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-585638 status -o json: exit status 2 (351.911469ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-585638","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-585638
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-585638: (2.033653664s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.29s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.14s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-405879 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-405879 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.130275992s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-585638 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-585638 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.750090173s)
--- PASS: TestNoKubernetes/serial/Start (7.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-585638 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-585638 "sudo systemctl is-active --quiet service kubelet": exit status 1 (284.105359ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (32.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (16.610247282s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.583878582s)
--- PASS: TestNoKubernetes/serial/ProfileList (32.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-585638
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-585638: (1.298383033s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-585638 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-585638 --driver=docker  --container-runtime=crio: (7.690015611s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.69s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-585638 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-585638 "sudo systemctl is-active --quiet service kubelet": exit status 1 (310.470811ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (41.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1101 10:37:18.445162   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.119741917s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (45.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (45.696530121s)
--- PASS: TestNetworkPlugins/group/flannel/Start (45.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-299863 "pgrep -a kubelet"
I1101 10:37:58.743014   61522 config.go:182] Loaded profile config "auto-299863": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-299863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x6tv7" [dd27fcda-2f15-46cc-9ef9-13d92cad2dde] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x6tv7" [dd27fcda-2f15-46cc-9ef9-13d92cad2dde] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004484232s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-c8gxz" [3d0d1976-67f6-42f1-9bcb-52735f42cf06] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00369107s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-299863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-299863 "pgrep -a kubelet"
I1101 10:38:10.446744   61522 config.go:182] Loaded profile config "flannel-299863": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (7.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-299863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-87kct" [eddd5881-0f23-4326-924c-b4a7e4103684] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:38:11.412615   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-87kct" [eddd5881-0f23-4326-924c-b4a7e4103684] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 7.004283327s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (7.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-299863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (67.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m7.714123724s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (35.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (35.968568751s)
--- PASS: TestNetworkPlugins/group/bridge/Start (35.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (50.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.009970878s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-299863 "pgrep -a kubelet"
I1101 10:39:14.761232   61522 config.go:182] Loaded profile config "bridge-299863": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (7.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-299863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gj9hx" [f553a983-b155-415f-a92b-557f55d1c3d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gj9hx" [f553a983-b155-415f-a92b-557f55d1c3d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 7.003520164s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (7.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-299863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-299863 "pgrep -a kubelet"
I1101 10:39:35.228683   61522 config.go:182] Loaded profile config "enable-default-cni-299863": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-299863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z8cvp" [50098c09-5585-4d6d-bf71-7f0490fe2f51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z8cvp" [50098c09-5585-4d6d-bf71-7f0490fe2f51] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.006684167s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (41.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.972183393s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-299863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-kv67z" [d96d90b0-50ce-4905-8aa9-e36bc019cd96] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004197587s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-299863 "pgrep -a kubelet"
I1101 10:40:03.523356   61522 config.go:182] Loaded profile config "calico-299863": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (8.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-299863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rtpfd" [736971e2-5d95-4e6e-8377-4d8dd5814aac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rtpfd" [736971e2-5d95-4e6e-8377-4d8dd5814aac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.005604627s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (52.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-299863 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (52.237144512s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-299863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (50.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.893493544s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-wlxcl" [4dce7511-532a-407c-b17c-0c7ce39914ea] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003946299s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-299863 "pgrep -a kubelet"
I1101 10:40:31.307083   61522 config.go:182] Loaded profile config "kindnet-299863": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-299863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l6vww" [93daa30c-536b-4bae-a440-1124dd01d1fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l6vww" [93daa30c-536b-4bae-a440-1124dd01d1fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004888848s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (53.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-753486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-753486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.587450513s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-299863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-299863 "pgrep -a kubelet"
I1101 10:40:57.448220   61522 config.go:182] Loaded profile config "custom-flannel-299863": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-299863 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zxh7q" [b3ebed42-5522-42a0-bead-eed8ad5903c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zxh7q" [b3ebed42-5522-42a0-bead-eed8ad5903c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.005840475s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (40.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.117423557s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-707467 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [19c1aad2-c5a5-4e04-b902-4eb808a4b2de] Pending
helpers_test.go:352: "busybox" [19c1aad2-c5a5-4e04-b902-4eb808a4b2de] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [19c1aad2-c5a5-4e04-b902-4eb808a4b2de] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004498645s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-707467 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-299863 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-299863 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-707467 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-707467 --alsologtostderr -v=3: (16.851803576s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.85s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m12.905393533s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-753486 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9dd3f019-b2ff-48ef-871e-baed334b2205] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9dd3f019-b2ff-48ef-871e-baed334b2205] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003415873s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-753486 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-707467 -n old-k8s-version-707467
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-707467 -n old-k8s-version-707467: exit status 7 (96.078195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-707467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
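
EnableAddonAfterStop verifies that an addon can be enabled while the cluster is stopped: the status call returns exit code 7 (host Stopped), which the test tolerates, and the dashboard addon is then enabled against the stopped profile. A manual equivalent using the commands from the log:

    # Exit status 7 here means the host is stopped, which is expected at this point.
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-707467 -n old-k8s-version-707467

    # Enabling the dashboard addon still succeeds against the stopped profile.
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-707467 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4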

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (51.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.676601888s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-707467 -n old-k8s-version-707467
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.07s)
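
SecondStart brings the previously stopped profile back up with the same flags as the first start and then confirms the host state. A condensed sketch of that sequence, with the flags copied from the logged command:

    # Restart the stopped profile; the start flags match the original run.
    out/minikube-linux-amd64 start -p old-k8s-version-707467 --memory=3072 --alsologtostderr --wait=true \
      --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.28.0

    # The host should now report Running rather than Stopped.
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-707467 -n old-k8s-version-707467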

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-753486 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-753486 --alsologtostderr -v=3: (16.289795023s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-071527 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [38d217fc-2e74-49ba-9a94-b40059463772] Pending
helpers_test.go:352: "busybox" [38d217fc-2e74-49ba-9a94-b40059463772] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [38d217fc-2e74-49ba-9a94-b40059463772] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003122766s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-071527 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (17.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-071527 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-071527 --alsologtostderr -v=3: (17.163090554s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (17.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753486 -n no-preload-753486
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753486 -n no-preload-753486: exit status 7 (80.312009ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-753486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (28.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-753486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-753486 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (28.20399909s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-753486 -n no-preload-753486
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (28.62s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-071527 -n embed-certs-071527
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-071527 -n embed-certs-071527: exit status 7 (79.51101ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-071527 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (46.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:42:18.444700   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/addons-407417/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-071527 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.080041422s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-071527 -n embed-certs-071527
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8b57h" [925c0c6b-e42b-4d12-b067-bbaf38b602ed] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004308606s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-d6xpb" [a3296379-d073-4ef5-882d-36bc6b0d6961] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004770632s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8b57h" [925c0c6b-e42b-4d12-b067-bbaf38b602ed] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00342748s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-753486 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)
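
UserAppExistsAfterStop and AddonExistsAfterStop both key off the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace and then describe the metrics-scraper deployment. A manual equivalent (kubectl get is an assumed stand-in for the test's readiness polling):

    # List the dashboard pods the test waits on.
    kubectl --context no-preload-753486 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

    # Same describe call the test issues once the pods are healthy.
    kubectl --context no-preload-753486 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard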

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-d6xpb" [a3296379-d073-4ef5-882d-36bc6b0d6961] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003980249s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-707467 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-753486 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
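
VerifyKubernetesImages dumps the profile's image list as JSON and flags anything outside the expected minikube/Kubernetes image set (here kindnetd and the busybox test image). The listing can be reproduced directly; parsing of the JSON fields is omitted since their exact names are not shown in this log:

    # JSON listing of every image in the no-preload profile; the test scans this
    # output for images that are not part of the stock Kubernetes/minikube set.
    out/minikube-linux-amd64 -p no-preload-753486 image list --format=json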

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-707467 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-433711 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [59420294-cb51-4139-83a6-0ab57cb66dde] Pending
helpers_test.go:352: "busybox" [59420294-cb51-4139-83a6-0ab57cb66dde] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [59420294-cb51-4139-83a6-0ab57cb66dde] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003711651s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-433711 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (24.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (24.421032004s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (24.42s)
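
The newest-cni profile starts with --network-plugin=cni but no CNI actually installed, which is why --wait is limited to apiserver,system_pods,default_sa and why the later UserAppExistsAfterStop/AddonExistsAfterStop steps are skipped. A quick way to observe the effect (a sketch; the NotReady state is the usual symptom of a missing CNI rather than something shown in this log):

    # With no CNI deployed, the node normally reports NotReady and regular pods
    # cannot be scheduled, so the test only waits on control-plane components.
    kubectl --context newest-cni-336923 get nodes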

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-433711 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-433711 --alsologtostderr -v=3: (16.311569057s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z9755" [6dab890d-627d-40a2-9c5b-8d97fd92cc9e] Running
E1101 10:42:58.938106   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/auto-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:42:58.944545   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/auto-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:42:58.955903   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/auto-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:42:58.977252   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/auto-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:42:59.018652   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/auto-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:42:59.100166   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/auto-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:42:59.261675   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/auto-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:42:59.583359   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/auto-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:00.224738   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/auto-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:01.506397   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/auto-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:04.068529   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/auto-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:04.161105   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:04.167539   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:04.179009   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:04.200423   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:04.241836   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:04.323336   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:04.485594   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:04.807334   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00345656s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z9755" [6dab890d-627d-40a2-9c5b-8d97fd92cc9e] Running
E1101 10:43:05.448928   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:06.730964   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003469011s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-071527 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711: exit status 7 (89.664663ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-433711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1101 10:43:09.292864   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-433711 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.717299252s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-433711 -n default-k8s-diff-port-433711
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.03s)
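
default-k8s-diff-port runs the API server on 8444 instead of the default 8443, per the --apiserver-port=8444 flag above. One way to confirm the non-default port after the restart (a sketch; the exact URL is not printed in this log):

    # The control-plane URL reported here should end in :8444 for this profile.
    kubectl --context default-k8s-diff-port-433711 cluster-info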

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-071527 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-336923 --alsologtostderr -v=3
E1101 10:43:11.412009   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/functional-638125/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-336923 --alsologtostderr -v=3: (8.155465271s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-336923 -n newest-cni-336923
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-336923 -n newest-cni-336923: exit status 7 (98.891112ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-336923 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (12.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:43:24.656137   61522 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/flannel-299863/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-336923 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (12.017188069s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-336923 -n newest-cni-336923
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-336923 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fbhvp" [fd3ea554-304d-4143-ab2e-461ce7d2077c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003556826s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fbhvp" [fd3ea554-304d-4143-ab2e-461ce7d2077c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004008603s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-433711 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-433711 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    

Test skip (27/327)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-299863 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-299863

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-299863

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-299863

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-299863

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-299863

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-299863

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-299863

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-299863

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-299863

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-299863

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-299863

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-299863" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-299863" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:34:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-896514
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:34:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-834138
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:35:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: stopped-upgrade-818439
contexts:
- context:
    cluster: kubernetes-upgrade-896514
    user: kubernetes-upgrade-896514
  name: kubernetes-upgrade-896514
- context:
    cluster: missing-upgrade-834138
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:34:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-834138
  name: missing-upgrade-834138
- context:
    cluster: stopped-upgrade-818439
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:35:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: stopped-upgrade-818439
  name: stopped-upgrade-818439
current-context: stopped-upgrade-818439
kind: Config
users:
- name: kubernetes-upgrade-896514
  user:
    client-certificate: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/kubernetes-upgrade-896514/client.crt
    client-key: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/kubernetes-upgrade-896514/client.key
- name: missing-upgrade-834138
  user:
    client-certificate: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/missing-upgrade-834138/client.crt
    client-key: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/missing-upgrade-834138/client.key
- name: stopped-upgrade-818439
  user:
    client-certificate: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/stopped-upgrade-818439/client.crt
    client-key: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/stopped-upgrade-818439/client.key
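Note on the output above: this kubeconfig has no cluster, context, or user entry for the kubenet-299863 profile, which is why every kubectl-based probe in this debugLogs dump fails with "context was not found" or "context does not exist", while the host-side probes fail because the minikube profile was never created. A minimal way to confirm this from the same workspace would be (an illustrative sketch, not commands the test actually ran):

  kubectl config get-contexts
  kubectl --context kubenet-299863 get nodes   # fails: context "kubenet-299863" does not exist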

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-299863

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-299863"

                                                
                                                
----------------------- debugLogs end: kubenet-299863 [took: 3.495009194s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-299863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-299863
--- SKIP: TestNetworkPlugins/group/kubenet (3.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-299863 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-299863" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:34:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-896514
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21830-58021/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:34:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-834138
contexts:
- context:
    cluster: kubernetes-upgrade-896514
    user: kubernetes-upgrade-896514
  name: kubernetes-upgrade-896514
- context:
    cluster: missing-upgrade-834138
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:34:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-834138
  name: missing-upgrade-834138
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-896514
  user:
    client-certificate: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/kubernetes-upgrade-896514/client.crt
    client-key: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/kubernetes-upgrade-896514/client.key
- name: missing-upgrade-834138
  user:
    client-certificate: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/missing-upgrade-834138/client.crt
    client-key: /home/jenkins/minikube-integration/21830-58021/.minikube/profiles/missing-upgrade-834138/client.key
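Note on the output above: besides lacking a cilium-299863 entry, this kubeconfig has current-context set to "", so kubectl invocations that omit --context have no cluster to target at all. Selecting one of the existing contexts explicitly would look like this (an illustrative sketch, not part of the test run):

  kubectl config use-context missing-upgrade-834138
  kubectl get nodes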

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-299863

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-299863" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-299863"

                                                
                                                
----------------------- debugLogs end: cilium-299863 [took: 4.026449762s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-299863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-299863
--- SKIP: TestNetworkPlugins/group/cilium (4.21s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-339061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-339061
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
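Note: this group is skipped on the docker driver because the driver-mount behaviour it exercises applies only to VM-based drivers such as VirtualBox. A rough equivalent of the manual invocation on a VirtualBox host (a sketch only; flag names assumed from minikube's start options, not executed in this run) would be:

  out/minikube-linux-amd64 start -p disable-driver-mounts-339061 --driver=virtualbox --disable-driver-mounts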

                                                
                                    